PyMySQL different updates in one query?
|
So I have a Python script that goes through roughly 350,000 data objects and, depending on some tests, needs to update the row that represents each of those objects in a MySQL db. I'm using PyMySQL because I've had the least trouble with it, especially when sending large select queries (select statements with a where column IN (....) clause that can contain 100,000+ values).
Since each update for each row can be different, each update statement is different. For example, for one row we might want to update first_name but for another row we want to leave first_name untouched and we want to update last_name.
This is why I don't want to use the cursor.executemany() method, which takes one generic update statement and then feeds it the values; as I mentioned, each update is different, so having one generic update statement doesn't really work for my case. I also don't want to send 350,000 update statements individually over the wire. Is there any way I can package all of my update statements together and send them at once?
I tried having them all in one query and using the cursor.execute() method but it doesn't seem to update all the rows.
|
Your best performance will be if you can encode your "tests" into the SQL logic itself, so you can boil everything down to a handful of UPDATE statements. Or at least get as many as possible done that way, so that fewer rows need to be updated individually.
For example:
UPDATE tablename set firstname = [some logic]
WHERE [logic that identifies which rows need the firstname updated];
You don't describe much about your tests, so it's hard to be sure. But you can typically get quite a lot of logic into your WHERE clause with a little bit of work.
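For illustration, here is a minimal PyMySQL sketch of that idea; the people table and its status and moved columns are assumptions made up for the example, not taken from your schema:
import pymysql

conn = pymysql.connect(host='localhost', user='user', password='secret', db='mydb')
try:
    with conn.cursor() as cur:
        # One round trip updates every row whose "test" matches, instead of 350k statements.
        cur.execute("""
            UPDATE people
            SET first_name = CASE WHEN status = 'stale' THEN 'Unknown' ELSE first_name END,
                last_name  = CASE WHEN moved = 1 THEN 'N/A' ELSE last_name END
            WHERE status = 'stale' OR moved = 1
        """)
    conn.commit()
finally:
    conn.close()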
Another option would be to put your logic into a stored procedure. You'll still be doing 350,000 updates, but at least they aren't all "going over the wire". I would use this only as a last resort, though; business logic should be kept in the application layer whenever possible, and stored procedures make your application less portable.
|
Implementing an iterator in Julia for an animation with PyPlot
|
I am just trying to reproduce this simple example of an animation in Matplotlib, but using PyPlot in Julia. I am having difficulties with the definition of the iterator simData() that is passed to the function FuncAnimation, because it seems that PyPlot doesn't recognize the iterator that I defined in Julia (via a Task) as such.
Here is my approach to define the same function simData():
function simData()
    t_max = 10.0
    dt = 0.05
    x = 0.0
    t = 0.0
    function it()
        while t < t_max
            x = sin(pi*t)
            t = t+dt
            produce(x,t)
        end
    end
    Task(it)
end
As you can check, this kind of iterator yields, in theory, the same values as the Python simData() generator of the example (try, for example, collect(simData())). However, I get this error when I try to do the animation:
LoadError: PyError (:PyObject_Call) <type 'exceptions.TypeError'>
TypeError('PyCall.jlwrap object is not an iterator',)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 1067, in __init__
TimedAnimation.__init__(self, fig, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 913, in __init__
*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 591, in __init__
self._init_draw()
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 1092, in _init_draw
self._draw_frame(next(self.new_frame_seq()))
while loading In[5], in expression starting on line 42
in pyerr_check at /home/diegotap/.julia/v0.4/PyCall/src/exception.jl:56
[inlined code] from /home/diegotap/.julia/v0.4/PyCall/src/exception.jl:81
in pycall at /home/diegotap/.julia/v0.4/PyCall/src/PyCall.jl:402
in call at /home/diegotap/.julia/v0.4/PyCall/src/PyCall.jl:429
As I mentioned, I think the problem is that the Julia iterator is not recognized as such by Python. Do you have any idea about how to fix that?
PS: Here is a Jupyter notebook with the full code that I used to do the animation.
|
In your code, you invoke FuncAnimation this way:
ani = anim.FuncAnimation(fig, simPoints, simData, blit = false, interval=10, repeat= true)
In the original code, simData was a generator; in your code it isn't (it returns one), so I'd expect your code to invoke it this way:
ani = anim.FuncAnimation(fig, simPoints, simData(), blit = false, interval=10, repeat= true)
Let's finish the problem -- since we can't get Python to recognize the return value of simData() as an iterator, we'll ignore that feature and have simPoints() call simData() to launch the task and then return a function for Python to animate:
using PyCall
using PyPlot
pygui(true)
@pyimport matplotlib.animation as animation
function simData()
    t_max = 10.0
    dt = 0.05
    x = 0.0
    t = -dt
    function it()
        while t < t_max
            x = sin(pi * t)
            t = t + dt
            produce(x, t)
        end
    end
    Task(it)
end

function simPoints()
    task = simData()
    function points(frame_number)
        x, t = consume(task)
        line[:set_data](t, x)
        return(line, "")
    end
    points
end

figure = plt[:figure]()
axis = figure[:add_subplot](111)
line = axis[:plot]([], [], "bo", ms = 10)[1]
axis[:set_ylim](-1, 1)
axis[:set_xlim](0, 10)
ani = animation.FuncAnimation(figure, simPoints(), blit=false, interval=10, frames=200, repeat=false)
plt[:show]()
This works for one pass of the bouncing ball across the graph and stops when it hits the right edge (unlike the original Python which repeats).
|
How to install cryptography on ubuntu?
|
My Ubuntu is 14.04 LTS.
When I install cryptography, the error is:
Installing egg-scripts.
uses namespace packages but the distribution does not require setuptools.
Getting distribution for 'cryptography==0.2.1'.
no previously-included directories found matching 'documentation/_build'
zip_safe flag not set; analyzing archive contents...
six: module references __path__
Installed /tmp/easy_install-oUz7ei/cryptography-0.2.1/.eggs/six-1.10.0-py2.7.egg
Searching for cffi>=0.8
Reading https://pypi.python.org/simple/cffi/
Best match: cffi 1.5.0
Downloading https://pypi.python.org/packages/source/c/cffi/cffi-1.5.0.tar.gz#md5=dec8441e67880494ee881305059af656
Processing cffi-1.5.0.tar.gz
Writing /tmp/easy_install-oUz7ei/cryptography-0.2.1/temp/easy_install-Yf2Yl3/cffi-1.5.0/setup.cfg
Running cffi-1.5.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-oUz7ei/cryptography-0.2.1/temp/easy_install-Yf2Yl3/cffi-1.5.0/egg-dist-tmp-A2kjMD
c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory
#include <ffi.h>
^
compilation terminated.
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
An error occurred when trying to install cryptography 0.2.1. Look above this message for any errors that were output by easy_install.
While:
Installing egg-scripts.
Getting distribution for 'cryptography==0.2.1'.
Error: Couldn't install: cryptography 0.2.1
I don't know why it failed. What is the reason? Is there something else I need to install on an Ubuntu system?
|
The answer is in the installation section of cryptography's docs, which pretty much reflects Angelos' answer:
Quoting it:
For Debian and Ubuntu, the following command will ensure that the
required dependencies are installed:
$ sudo apt-get install build-essential libssl-dev libffi-dev python-dev
For Fedora and RHEL-derivatives, the following command will ensure
that the required dependencies are installed:
$ sudo yum install gcc libffi-devel python-devel openssl-devel
You should now be able to build and install cryptography with the
usual
$ pip install cryptography
|
Tensorflow python : Accessing individual elements in a tensor
|
This question is about accessing individual elements in a tensor, say [[1,2,3]]. I need to access the inner element [1,2,3]. (This can be done using .eval() or sess.run(), but that takes longer when the size of the tensor is huge.)
Is there any method to do the same faster?
Thanks in Advance.
|
There are two main ways to access subsets of the elements in a tensor, either of which should work for your example.
Use the indexing operator (based on tf.slice()) to extract a contiguous slice from the tensor.
input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
output = input[0, :]
print sess.run(output) # ==> [1 2 3]
The indexing operator supports many of the same slice specifications as NumPy does.
Use the tf.gather() op to select a non-contiguous slice from the tensor.
input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
output = tf.gather(input, 0)
print sess.run(output) # ==> [1 2 3]
output = tf.gather(input, [0, 2])
print sess.run(output) # ==> [[1 2 3] [7 8 9]]
Note that tf.gather() only allows you to select whole slices in the 0th dimension (whole rows in the example of a matrix), so you may need to tf.reshape() or tf.transpose() your input to obtain the appropriate elements.
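If you need individual elements rather than whole rows, one hedged sketch (reusing the same input tensor and sess as above) is to flatten first and then gather flat indices:
flat = tf.reshape(input, [-1])       # flatten the 3x3 matrix to shape [9]
output = tf.gather(flat, [0, 4, 8])  # elements (0,0), (1,1) and (2,2)
print sess.run(output)  # ==> [1 5 9]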
|
Fast way of crossing strings in a list
|
If I have a list like so:
shops=['A','B','C','D']
And would like to create the following new lists (I cross each element with every other and create a string where the first part comes alphabetically before the second):
['A-B', 'A-C', 'A-D']
['A-B', 'B-C', 'B-D']
['A-C', 'B-C', 'C-D']
['A-D', 'B-D', 'C-D']
I have something like this:
for a in shops:
    cons = []
    for b in shops:
        if a!=b:
            con = [a,b]
            con = sorted(con, key=lambda x: float(x))
            cons.append(con[0]+'-'+con[1])
    print(cons)
However, this is pretty slow for large lists (e.g. 1000 elements, where I have 1000*999*0.5 outputs). Is there a more efficient way of doing this?
I could have used an if-else clause for the sort e.g.
for a in shops:
    cons = []
    for b in shops:
        if a<b:
            cons.append(a+"-"+b)
        elif a>b:
            cons.append(b+"-"+a)
    print(cons)
I haven't timed this yet; however, I thought the main slowdown was the double for-loop.
|
You can create a nested list-comprehension with some additional checks:
>>> shops=['A','B','C','D']
>>> [["-".join((min(a,b), max(a,b))) for b in shops if b != a] for a in shops]
[['A-B', 'A-C', 'A-D'],
['A-B', 'B-C', 'B-D'],
['A-C', 'B-C', 'C-D'],
['A-D', 'B-D', 'C-D']]
Note that this will probably not be much faster than your code, as you still have to generate all those combinations. In practice, you could make it a generator expression, so the elements are not generated all at once but only "as needed":
gen = (["-".join((min(a,b), max(a,b))) for b in shops if b != a] for a in shops)
for item in gen:
print(item)
Update: I did some timing analysis using IPython's %timeit. Turns out your second implementation is the fastest. Tested with a list of 100 strings (map(str, range(100))) and after turning each of the methods into generators.
In [32]: %timeit list(test.f1()) # your first implementation
100 loops, best of 3: 13.5 ms per loop
In [33]: %timeit list(test.f2()) # your second implementation
1000 loops, best of 3: 1.63 ms per loop
In [34]: %timeit list(test.g()) # my implementation
100 loops, best of 3: 3.49 ms per loop
You can speed it up by using a simple if/else instead of min/max, as in your 2nd implementation, then they are about equally fast.
(["-".join((a,b) if a < b else (b,a)) for b in shops if b != a] for a in shops)
|
In TensorFlow is there any way to just initialize uninitialised variables?
|
The standard way of initializing variables in TensorFlow is
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
After running some learning for a while, I create a new set of variables, but once I initialize them it resets all my existing variables. At the moment my way around this is to save all the variables I need and then reapply them after the tf.initialize_all_variables call. This works but is a bit ugly and clunky. I cannot find anything like this in the docs...
Does anyone know of any good way to just initialize the uninitialized variables?
|
There is no elegant* way to enumerate the uninitialized variables in a graph. However, if you have access to the new variable objects—let's call them v_6, v_7, and v_8—you can selectively initialize them using tf.initialize_variables():
init_new_vars_op = tf.initialize_variables([v_6, v_7, v_8])
sess.run(init_new_vars_op)
* A process of trial and error could be used to identify the uninitialized variables, as follows:
uninitialized_vars = []
for var in tf.all_variables():
    try:
        sess.run(var)
    except tf.errors.FailedPreconditionError:
        uninitialized_vars.append(var)

init_new_vars_op = tf.initialize_variables(uninitialized_vars)
# ...
...however, I would not condone such behavior :-).
|
Is there a pythonic way to skip decoration on a subclass' method?
|
I have a class which decorates some methods using a decorator from another library. Specifically, the class subclasses flask-restful resources, decorates the http methods with httpauth.HTTPBasicAuth().login_required(), and provides some sensible defaults on a model service.
On most subclasses I want the decorator applied; therefore I'd rather remove it than add it in the subclasses.
My thought is to have a private method which does the operations and a public method which is decorated. The effects of decoration can be avoided by overriding the public method to call the private one and not decorating this override. Mocked example below.
I am curious to know if there's a better way to do this. Is there a shortcut for 'cancelling decorators' in python that gives this effect?
Or can you recommend a better approach?
Some other questions have suitable answers for this, e.g. Is there a way to get the function a decorator has wrapped?. But my question is about broader design: I am interested in any pythonic way to run the operations in decorated methods without the effects of decoration. My example is one such way, but there may be others.
def auth_required(fn):
    def new_fn(*args, **kwargs):
        print('Auth required for this resource...')
        fn(*args, **kwargs)
    return new_fn

class Resource:
    name = None

    @auth_required
    def get(self):
        self._get()

    def _get(self):
        print('Getting %s' % self.name)

class Eggs(Resource):
    name = 'Eggs'

class Spam(Resource):
    name = 'Spam'

    def get(self):
        self._get()
        # super(Spam, self)._get()
eggs = Eggs()
spam = Spam()
eggs.get()
# Auth required for this resource...
# Getting Eggs
spam.get()
# Getting Spam
|
Flask-HTTPAuth uses functools.wraps in the login_required decorator:
def login_required(self, f):
    @wraps(f)
    def decorated(*args, **kwargs):
        ...
From Python 3.2, as this calls update_wrapper, you can access the original function via __wrapped__:
To allow access to the original function for introspection and other
purposes (e.g. bypassing a caching decorator such as lru_cache()),
this function automatically adds a __wrapped__ attribute to the
wrapper that refers to the function being wrapped.
If you're writing your own decorators, as in your example, you can also use @wraps to get the same functionality (as well as keeping the docstrings, etc.).
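For instance, here is a hedged sketch based on the auth_required decorator from your question, rewritten with @wraps (Python 3.2+ assumed) so the undecorated method stays reachable:
from functools import wraps

def auth_required(fn):
    @wraps(fn)  # exposes the original function as new_fn.__wrapped__
    def new_fn(*args, **kwargs):
        print('Auth required for this resource...')
        return fn(*args, **kwargs)
    return new_fn

class Spam(object):
    @auth_required
    def get(self):
        print('Getting Spam')

spam = Spam()
spam.get()                  # Auth required for this resource... / Getting Spam
Spam.get.__wrapped__(spam)  # Getting Spam (decoration bypassed)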
See also Is there a way to get the function a decorator has wrapped?
|
How do I avoid the "self.x = x; self.y = y; self.z = z" pattern in __init__?
|
I see patterns like
def __init__(self, x, y, z):
    ...
    self.x = x
    self.y = y
    self.z = z
    ...
quite frequently, often with a lot more parameters. Is there a good way to avoid this type of tedious repetitiveness? Should I inherit from namedtuple?
|
EDIT
It seems that several people are concerned about presenting this solution, so I will provide a very clear disclaimer. You should not use this solution. I only provide it as information, so you know that the language is capable of this. The rest of the answer is just showing language capabilities, not endorsing using them in this way.
ORIGINAL ANSWER
There isn't really anything wrong with explicitly copying parameters into attributes. If you have too many parameters in the ctor, it is sometimes considered a code smell, and maybe you should group these params into fewer objects. Other times, it is necessary and there is nothing wrong with it. Either way, doing it explicitly is the way to go.
However, since you are asking HOW it can be done (and not whether it should be done), then one solution is this:
class A:
    def __init__(self, **kwargs):
        for key in kwargs:
            setattr(self, key, kwargs[key])

a = A(l=1, d=2)
a.l  # will return 1
a.d  # will return 2
|
Python keyword arguments unpack and return dictionary
|
I have a function definition as below and I am passing keyword arguments. How do I get to return a dictionary with the same name as the keyword arguments?
Manually I can do:
def generate_student_dict(first_name=None, last_name=None, birthday=None, gender=None):
    return {
        'first_name': first_name,
        'last_name': last_name,
        'birthday': birthday,
        'gender': gender
    }
But I don't want to do that. Is there any way that I can make this work without actually typing the dict?
def generate_student_dict(self, first_name=None, last_name=None, birthday=None, gender=None):
    return  # Packed value from keyword arguments.
|
If that way is suitable for you, use kwargs (see Understanding kwargs in Python) as in code snippet below:
def generate_student_dict(self, **kwargs):
    return kwargs
Otherwise, you can create a copy of params with built-in locals() at function start and return that copy:
def generate_student_dict(first_name=None, last_name=None, birthday=None, gender=None):
    # It's important to copy locals in the first line of code (see @MuhammadTahir comment).
    args_passed = locals().copy()
    # some code
    return args_passed

generate_student_dict()
|
Python: ensure os.environ and sys.path are equal: web-requests, shell, cron, celery
|
I want to ensure that os.environ and sys.path are identical for all ways we start the Python interpreter:
web requests via Django, and Apache mod_wsgi
Cron jobs
Interactive logins via ssh
Celery jobs
Jobs started via systemd
Is there a common way to solve this?
If yes, great: what does it look like?
If no, sad: everybody solves this on their own... What is a good way to solve it?
Operating System: Linux (with systemd support)
Update
More explicit:
I want sys.path to be the same in web requests, cron jobs, python started from shell, ...
I want os.environ to be the same in web requests, cron jobs, python started from shell, ...
Update2
For systemd we use EnvironmentFile
Update3
We use virtualenv
|
You can use envdir python port (here is the original) for managing the environment variables.
If you are only concerned about Django, I suggest using envdir from your settings.py programmatically
You can update the environment programmatically (e.g.: in the wsgi file, django's manage.py, settings.py, etc.)
import envdir
import os
# print os.environ['FOO'] # would raise a KeyError
path = '../envdir/prod'
if not os.path.isdir(path):
    raise ValueError('%s is not a dir' % path)
envdir.Env(path)
print os.environ['FOO']
or you can run the your process through envdir on the command line, e.g.: envdir envs/prod/ python manage.py runserver
I suggest creating aliases for python, pip, etc. (as you don't want to overwrite the system's own python), e.g.: alias python-mycorp="envdir /abs/path/to/envs/prod/ python" (or if you prefer, write a full shell script instead of an alias).
|
Why is calling float() on a number slower than adding 0.0 in Python?
|
What is the reason that casting an integer to a float is slower than adding 0.0 to that int in Python?
import timeit
def add_simple():
    for i in range(1000):
        a = 1 + 0.0

def cast_simple():
    for i in range(1000):
        a = float(1)

def add_total():
    total = 0
    for i in range(1000):
        total += 1 + 0.0

def cast_total():
    total = 0
    for i in range(1000):
        total += float(1)
print "Add simple timing: %s" % timeit.timeit(add_simple, number=1)
print "Cast simple timing: %s" % timeit.timeit(cast_simple, number=1)
print "Add total timing: %s" % timeit.timeit(add_total, number=1)
print "Cast total timing: %s" % timeit.timeit(cast_total, number=1)
The output of which is:
Add simple timing: 0.0001220703125
Cast simple timing: 0.000469923019409
Add total timing: 0.000164985656738
Cast total timing: 0.00040078163147
|
If you use the dis module, you can start to see why:
In [11]: dis.dis(add_simple)
2 0 SETUP_LOOP 26 (to 29)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1000)
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 GET_ITER
>> 13 FOR_ITER 12 (to 28)
16 STORE_FAST 0 (i)
3 19 LOAD_CONST 4 (1.0)
22 STORE_FAST 1 (a)
25 JUMP_ABSOLUTE 13
>> 28 POP_BLOCK
>> 29 LOAD_CONST 0 (None)
32 RETURN_VALUE
In [12]: dis.dis(cast_simple)
2 0 SETUP_LOOP 32 (to 35)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1000)
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 GET_ITER
>> 13 FOR_ITER 18 (to 34)
16 STORE_FAST 0 (i)
3 19 LOAD_GLOBAL 1 (float)
22 LOAD_CONST 2 (1)
25 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
28 STORE_FAST 1 (a)
31 JUMP_ABSOLUTE 13
>> 34 POP_BLOCK
>> 35 LOAD_CONST 0 (None)
38 RETURN_VALUE
Note the extra LOAD_GLOBAL and CALL_FUNCTION in cast_simple.
Function calls in Python are (relatively) slow, and so are name lookups. Because casting to float requires a name lookup plus a function call, that's why it's slower.
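As a rough, hedged illustration of the lookup cost (numbers will vary by machine), binding float to a local name in timeit's setup removes the repeated global lookup and leaves mostly call overhead:
import timeit

print timeit.timeit('float(1)', number=10**6)                  # global name lookup + call
print timeit.timeit('f(1)', setup='f = float', number=10**6)   # local name lookup + call
print timeit.timeit('1 + 0.0', number=10**6)                   # folded to a constant at compile time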
|
Can you create a Python list from a string, while keeping characters in specific keywords together?
|
I want to create a list from the characters in a string, but keep specific keywords together.
For example:
keywords: car, bus
INPUT:
"xyzcarbusabccar"
OUTPUT:
["x", "y", "z", "car", "bus", "a", "b", "c", "car"]
|
With re.findall. Alternate between your keywords first.
>>> import re
>>> s = "xyzcarbusabccar"
>>> re.findall('car|bus|[a-z]', s)
['x', 'y', 'z', 'car', 'bus', 'a', 'b', 'c', 'car']
In case you have overlapping keywords, note that this solution will find the first one you encounter:
>>> s = 'abcaratab'
>>> re.findall('car|rat|[a-z]', s)
['a', 'b', 'car', 'a', 't', 'a', 'b']
You can make the solution more general by substituting the [a-z] part with whatever you like, \w for example, or a simple . to match any character.
Short explanation why this works and why the regex '[a-z]|car|bus' would not work:
The regular expression engine tries the alternating options from left to right and is "eager" to return a match. That means it considers the whole alternation to match as soon as one of the options has been fully matched. At this point, it will not try any of the remaining options but stop processing and report a match immediately. With '[a-z]|car|bus', the engine will report a match when it sees any character in the character class [a-z] and never go on to check if 'car' or 'bus' could also be matched.
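To generalize this, a small sketch that builds the alternation from a keyword list (sorted longest-first so that, with overlapping keywords, the longer one gets a chance to match before a shorter prefix):
import re

keywords = ['car', 'bus']
pattern = '|'.join(sorted(map(re.escape, keywords), key=len, reverse=True)) + '|.'
print(re.findall(pattern, 'xyzcarbusabccar'))
# ['x', 'y', 'z', 'car', 'bus', 'a', 'b', 'c', 'car']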
|
Print letters in specific pattern in Python
|
I have the following string and I split it:
>>> st = '%2g%k%3p'
>>> l = filter(None, st.split('%'))
>>> print l
['2g', 'k', '3p']
Now I want to print the g letter two times, the k letter one time and the p letter three times:
ggkppp
How is it possible?
|
You could use a generator expression with isdigit() to check whether the first symbol is a digit or not, and then repeat the following string the appropriate number of times. Then you could use join to get your output:
''.join(i[1:]*int(i[0]) if i[0].isdigit() else i for i in l)
Demonstration:
In [70]: [i[1:]*int(i[0]) if i[0].isdigit() else i for i in l ]
Out[70]: ['gg', 'k', 'ppp']
In [71]: ''.join(i[1:]*int(i[0]) if i[0].isdigit() else i for i in l)
Out[71]: 'ggkppp'
EDIT
Using the re module, when the leading number can have several digits:
''.join(re.search('(\d+)(\w+)', i).group(2)*int(re.search('(\d+)(\w+)', i).group(1)) if re.search('(\d+)(\w+)', i) else i for i in l)
Example:
In [144]: l = ['12g', '2kd', 'h', '3p']
In [145]: ''.join(re.search('(\d+)(\w+)', i).group(2)*int(re.search('(\d+)(\w+)', i).group(1)) if re.search('(\d+)(\w+)', i) else i for i in l)
Out[145]: 'ggggggggggggkdkdhppp'
EDIT2
For your input like:
st = '%2g_%3k%3p'
You could replace _ with an empty string and then add _ back to the end if the word from the list ends with the _ symbol:
st = '%2g_%3k%3p'
l = list(filter(None, st.split('%')))
''.join((re.search('(\d+)(\w+)', i).group(2)*int(re.search('(\d+)(\w+)', i).group(1))).replace("_", "") + '_' * i.endswith('_') if re.search('(\d+)(\w+)', i) else i for i in l)
Output:
'gg_kkkppp'
EDIT3
A solution without the re module, using plain loops, which works for counts of up to 2 digits. You could define these functions:
def add_str(ind, st):
    if not st.endswith('_'):
        return st[ind:] * int(st[:ind])
    else:
        return st[ind:-1] * int(st[:ind]) + '_'

def collect(l):
    final_str = ''
    for i in l:
        if i[0].isdigit():
            if i[1].isdigit():
                final_str += add_str(2, i)
            else:
                final_str += add_str(1, i)
        else:
            final_str += i
    return final_str
And then use them as:
l = ['12g_', '3k', '3p']
print(collect(l))
gggggggggggg_kkkppp
|
What could cause NetworkX & PyGraphViz to work fine alone but not together?
|
I'm working to learning some Python graph visualization. I found a few blog posts doing some things I wanted to try. Unfortunately I didn't get too far, encountering this error: AttributeError: 'module' object has no attribute 'graphviz_layout'
The simplest snip of code which reproduces the error on my system is this,
In [1]: import networkx as nx
In [2]: G=nx.complete_graph(5)
In [3]: nx.draw_graphviz(G)
------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-481ad1c1771c> in <module>()
----> 1 nx.draw_graphviz(G)
/usr/lib/python2.7/site-packages/networkx/drawing/nx_pylab.pyc in draw_graphviz(G, prog, **kwargs)
982 See networkx.draw_networkx() for a description of optional keywords.
983 """
--> 984 pos = nx.drawing.graphviz_layout(G, prog)
985 draw(G, pos, **kwargs)
986
AttributeError: 'module' object has no attribute 'graphviz_layout'
I found similar questions and posts describing difficulty with this combo, but not quite the same error. One was close, but it automagically resolved itself.
First, I verified all the required packages for NetworkX and PyGraphViz (which lists similar requirements to Scipy) were installed.
Next, I looked for snips to test my installation of these modules in Python. The first two examples are from the NetworkX Reference Documentation. This lists a few example snips using both MatPlotLib and GraphViz.
The MatPlotLib code example works for me (renders an image to the screen),
In [11]: import networkx as nx
In [12]: G=nx.complete_graph(5)
In [13]: import matplotlib.pyplot as plt
In [13]: nx.draw(G)
In [13]: plt.show()
However, the GraphViz snips also produce similar errors,
In [16]: import networkx as nx
In [17]: G=nx.complete_graph(5)
In [18]: H=nx.from_agraph(A)
------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-808fa68cefaa> in <module>()
----> 1 H=nx.from_agraph(A)
AttributeError: 'module' object has no attribute 'from_agraph'
In [19]: A=nx.to_agraph(G)
------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-19-32d1616bb41a> in <module>()
----> 1 A=nx.to_agraph(G)
AttributeError: 'module' object has no attribute 'to_agraph'
In [20]: print G
complete_graph(5)
Then I tried PyGraphViz's tutorial page on Layout & Drawing. This has some snips as well. PyGraphViz passed with Neato (default), PyDot, and Circo Post Script output (viewed using Gimp). (The only difference is these PyGraphViz examples are not rendered to the display, but to files).
In [1]: import pygraphviz as pgv
In [2]: d={'1': {'2': None}, '2': {'1': None, '3': None}, '3': {'2': None}}
In [3]: A=pgv.AGraph(d)
In [4]: A.write("pygraphviz_test_01.dot")
In [5]: A.layout()
In [6]: A.draw('pygraphviz_test_01.png')
Adding to the complexity, PyGraphViz requires GraphViz package binaries in order to work. I'm using Arch Linux, and installed that distro's version. Arch Linux has an example to test installation (again, output to file) which also passed.
What am I missing? What could cause NetworkX & PyGraphViz to work fine alone but not together?
|
There is a small bug in the draw_graphviz function in networkx-1.11 triggered by the change that the graphviz drawing tools are no longer imported into the top level namespace of networkx.
The following is a workaround
In [1]: import networkx as nx
In [2]: G = nx.complete_graph(5)
In [3]: from networkx.drawing.nx_agraph import graphviz_layout
In [4]: pos = graphviz_layout(G)
In [5]: nx.draw(G, pos)
To use the other functions such as to_agraph, write_dot, etc you will need to explicitly use the longer path name
nx.drawing.nx_agraph.write_dot()
or import the function into the top-level namespace
from networkx.drawing.nx_agraph import write_dot
write_dot()
|
In python, how do I cast a class object to a dict
|
Let's say I've got a simple class in python
class Wharrgarbl(object):
    def __init__(self, a, b, c, sum, version='old'):
        self.a = a
        self.b = b
        self.c = c
        self.sum = 6
        self.version = version

    def __int__(self):
        return self.sum + 9000

    def __what_goes_here__(self):
        return {'a': self.a, 'b': self.b, 'c': self.c}
I can cast it to an integer very easily
>>> w = Wharrgarbl('one', 'two', 'three', 6)
>>> int(w)
9006
Which is great! But, now I want to cast it to a dict in a similar fashion
>>> w = Wharrgarbl('one', 'two', 'three', 6)
>>> dict(w)
{'a': 'one', 'c': 'three', 'b': 'two'}
What do I need to define for this to work? I tried substituting both __dict__ and dict for __what_goes_here__, but dict(w) resulted in a TypeError: Wharrgarbl object is not iterable in both cases. I don't think simply making the class iterable will solve the problem. I also attempted many googles with as many different wordings of "python cast object to dict" as I could think of but couldn't find anything relevant :{
Also! Notice how calling w.__dict__ won't do what I want, because it's going to contain w.version and w.sum. I want to customize the cast to dict in the same way that I can customize the cast to int by using def __int__(self).
I know that I could just do something like this
>>> w.__what_goes_here__()
{'a': 'one', 'c': 'three', 'b': 'two'}
But I am assuming there is a pythonic way to make dict(w) work since it is the same type of thing as int(w) or str(w). If there isn't a more pythonic way, that's fine too, just figured I'd ask. Oh! I guess since it matters, this is for python 2.7, but super bonus points for a 2.4 old and busted solution as well.
There is another question Overloading __dict__() on python class that is similar to this one but may be different enough to warrant this not being a duplicate. I believe that OP is asking how to cast all the data in his class objects as dictionaries. I'm looking for a more customized approach in that I don't want everything in __dict__ included in the dictionary returned by dict(). Something like public vs private variables may suffice to explain what I'm looking for. The objects will be storing some values used in calculations and such that I don't need/want to show up in the resulting dictionaries.
UPDATE:
I've chosen to go with the asdict route suggested but it was a tough choice selecting what I wanted to be the answer to the question. Both @RickTeachey and @jpmc26 provided the answer I'm going to roll with but the former had more info and options and landed on the same result as well and was upvoted more so I went with it. Upvotes all around though and thanks for the help. I've lurked long and hard on stackoverflow and I'm trying to get my toes in the water more.
|
You need to override __iter__.
Like this, for example:
def __iter__(self):
    yield 'a', self.a
    yield 'b', self.b
    yield 'c', self.c
Now you can just do:
dict(my_object)
I would also suggest looking into the 'collections.abc' module. This answer might be helpful:
http://stackoverflow.com/a/27803404/2437514
Specifically, you'll want to look at the 'Mapping' and 'MutableMapping' objects. If you use that module and inherit your object from one of the dict-like abcs, you can cast your object to a dict just as you require.
As noted in the comments below: it's worth mentioning that doing this the abc way essentially turns your object class into a dict-like class. So everything you would be able to do with dict, you could do with your own class object. This may be, or may not be, desirable. It would also mean there would probably be little reason (because of duck typing) to bother casting your object to a dict in the first place.
Also consider looking at the numerical abcs in the numbers module:
https://docs.python.org/3/library/numbers.html
Since you're also casting your object to an int, it might make more sense to essentially turn your class into a full fledged int so that casting isn't necessary.
However, after thinking about this a bit more, I would very much consider the asdict way of doing things suggested by other answers. It does not appear that your object is really much of a collection. Using the iter or abc method could be confusing for others unless it is very obvious exactly which object members would and would not be iterated.
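For reference, a minimal sketch of that asdict route using the class from the question; the method name asdict is just a convention here, not a special protocol:
class Wharrgarbl(object):
    def __init__(self, a, b, c, sum, version='old'):
        self.a = a
        self.b = b
        self.c = c
        self.sum = sum
        self.version = version

    def asdict(self):
        # Only the "public" fields; sum and version stay internal.
        return {'a': self.a, 'b': self.b, 'c': self.c}

w = Wharrgarbl('one', 'two', 'three', 6)
print(w.asdict())  # {'a': 'one', 'b': 'two', 'c': 'three'} (key order may vary)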
|
"freeze" some variables/scopes in tensorflow: stop_gradient vs passing variables to minimize
|
I am trying to implement Adversarial NN, which requires to 'freeze' one or the other part of the graph during alternating training minibatches. I.e. there two sub-networks: G and D.
G( Z ) -> Xz
D( X ) -> Y
where loss function of G depends on D[G(Z)], D[X].
First I need to train the parameters in D with all of G's parameters fixed, and then the parameters in G with the parameters in D fixed. The loss function in the first case will be the negative of the loss function in the second case, and the update will have to apply to the parameters of either the first or the second subnetwork.
I saw that tensorflow has tf.stop_gradient function. For purpose of training the D (downstream) subnetwork I can use this function to block the gradient flow to
Z -> [ G ] -> tf.stop_gradient(Xz) -> [ D ] -> Y
The tf.stop_gradient is very succinctly annotated with no in-line example (and example seq2seq.py is too long and not that easy to read), but looks like it must be called during the graph creation. Does it imply that if I want to block/unblock gradient flow in alternating batches, I need to re-create and re-initialize the graph model?
Also it seems that one cannot block the gradient flowing through the G (upstream) network by means of tf.stop_gradient, right?
As an alternative I saw that one can pass the list of variables to the optimizer call as opt_op = opt.minimize(cost, <list of variables>), which would be an easy solution if one could get all variables in the scopes of each subnetwork. Can one get a <list of variables> for a tf.scope?
|
The easiest way to achieve this, as you mention in your question, is to create two optimizer operations using separate calls to opt.minimize(cost, ...). By default, the optimizer will use all of the variables in tf.trainable_variables(). If you want to filter the variables to a particular scope, you can use the optional scope argument to tf.get_collection() as follows:
optimizer = tf.train.AdagradOptimizer(0.01)

first_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                                     "scope/prefix/for/first/vars")
first_train_op = optimizer.minimize(cost, var_list=first_train_vars)

second_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                                      "scope/prefix/for/second/vars")
second_train_op = optimizer.minimize(cost, var_list=second_train_vars)
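A hedged sketch of how the two ops above could then be alternated per minibatch; sess, the feed dict, and num_steps are assumed to be defined elsewhere:
for step in range(num_steps):
    if step % 2 == 0:
        sess.run(first_train_op, feed_dict=feed)   # update only the first scope's variables
    else:
        sess.run(second_train_op, feed_dict=feed)  # update only the second scope's variables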
|
How to sort an array of integers faster than quicksort?
|
Sorting an array of integers with numpy's quicksort has become the
bottleneck of my algorithm. Unfortunately, numpy does not have
radix sort yet.
Although counting sort would be a one-liner in numpy:
np.repeat(np.arange(1+x.max()), np.bincount(x))
see the accepted answer to the How can I vectorize this python count sort so it is absolutely as fast as it can be? question, the integers
in my application can run from 0 to 2**32.
Am I stuck with quicksort?
This post was primarily motivated by the
Numpy grouping using itertools.groupby performance
question.
Also note that
it is not merely OK to ask and answer your own question, it is explicitly encouraged.
|
No, you are not stuck with quicksort. You could use, for example,
integer_sort from
Boost.Sort
or u4_sort from usort. When sorting this array:
array(randint(0, high=1<<32, size=10**8), uint32)
I get the following results:
NumPy quicksort: 8.636 s 1.0 (baseline)
Boost.Sort integer_sort: 4.327 s 2.0x speedup
usort u4_sort: 2.065 s 4.2x speedup
I would not jump to conclusions based on this single experiment and use
usort blindly. I would test with my actual data and measure what happens.
Your mileage will vary depending on your data and on your machine. The
integer_sort in Boost.Sort has a rich set of options for tuning, see the
documentation.
Below I describe two ways to call a native C or C++ function from Python. Despite the long description, it's fairly easy to do it.
Boost.Sort
Put these lines into the spreadsort.cpp file:
#include <cinttypes>
#include "boost/sort/spreadsort/spreadsort.hpp"

using namespace boost::sort::spreadsort;

extern "C" {
    void spreadsort(std::uint32_t* begin, std::size_t len) {
        integer_sort(begin, begin + len);
    }
}
It basically instantiates the templated integer_sort for 32 bit
unsigned integers; the extern "C" part ensures C linkage by disabling
name mangling.
Assuming you are using gcc and that the necessary include files of boost
are under the /tmp/boost_1_60_0 directory, you can compile it:
g++ -O3 -std=c++11 -march=native -DNDEBUG -shared -fPIC -I/tmp/boost_1_60_0 spreadsort.cpp -o spreadsort.so
The key flags are -fPIC to generate position-independent code and -shared to generate a shared object .so file. (Read the gcc docs for further details.)
Then, you wrap the spreadsort() C++ function
in Python using ctypes:
from ctypes import cdll, c_size_t, c_uint32
from numpy import uint32
from numpy.ctypeslib import ndpointer
__all__ = ['integer_sort']
# In spreadsort.cpp: void spreadsort(std::uint32_t* begin, std::size_t len)
lib = cdll.LoadLibrary('./spreadsort.so')
sort = lib.spreadsort
sort.restype = None
sort.argtypes = [ndpointer(c_uint32, flags='C_CONTIGUOUS'), c_size_t]
def integer_sort(arr):
    assert arr.dtype == uint32, 'Expected uint32, got {}'.format(arr.dtype)
    sort(arr, arr.size)
Alternatively, you can use cffi:
from cffi import FFI
from numpy import uint32
__all__ = ['integer_sort']
ffi = FFI()
ffi.cdef('void spreadsort(uint32_t* begin, size_t len);')
C = ffi.dlopen('./spreadsort.so')
def integer_sort(arr):
    assert arr.dtype == uint32, 'Expected uint32, got {}'.format(arr.dtype)
    begin = ffi.cast('uint32_t*', arr.ctypes.data)
    C.spreadsort(begin, arr.size)
At the cdll.LoadLibrary() and ffi.dlopen() calls I assumed that the
path to the spreadsort.so file is ./spreadsort.so. Alternatively,
you can write
lib = cdll.LoadLibrary('spreadsort.so')
or
C = ffi.dlopen('spreadsort.so')
if you append the path to spreadsort.so to the LD_LIBRARY_PATH environment
variable. See also Shared Libraries.
Usage. In both cases you simply call the above Python wrapper function integer_sort()
with your numpy array of 32 bit unsigned integers.
usort
As for u4_sort, you can compile it as follows:
cc -DBUILDING_u4_sort -I/usr/include -I./ -I../ -I../../ -I../../../ -I../../../../ -std=c99 -fgnu89-inline -O3 -g -fPIC -shared -march=native u4_sort.c -o u4_sort.so
Issue this command in the directory where the u4_sort.c file is located.
(Probably there is a less hackish way but I failed to figure that out. I
just looked into the deps.mk file in the usort directory to find out
the necessary compiler flags and include paths.)
Then, you can wrap the C function as follows:
from cffi import FFI
from numpy import uint32
__all__ = ['integer_sort']
ffi = FFI()
ffi.cdef('void u4_sort(unsigned* a, const long sz);')
C = ffi.dlopen('u4_sort.so')
def integer_sort(arr):
    assert arr.dtype == uint32, 'Expected uint32, got {}'.format(arr.dtype)
    begin = ffi.cast('unsigned*', arr.ctypes.data)
    C.u4_sort(begin, arr.size)
In the above code, I assumed that the path to u4_sort.so has been
appended to the LD_LIBRARY_PATH environment variable.
Usage. As before with Boost.Sort, you simply call the above Python wrapper function integer_sort() with your numpy array of 32 bit unsigned integers.
|
Find elements that occur in some but not all lists
|
Suppose I have several lists of integers like so:
[0,3,4]
[2,3,4,7]
[2,3,4,6]
What's the most efficient / most pythonic way to build a single list of all elements that occur in at least one list but do not occur in all lists? In this case it would be
[0,2,7,6]
|
The answer is implied in your question... if you substitute "sets" for "lists". As StephenTG posted, simply get the difference between the union and the intersection of all the sets.
The advantage of using sets over Counter is that you need make no assumptions about values appearing only once in each list.
The following works regardless of how many lists you have:
> list_of_sets = [set(l) for l in lists]
> set.union(*list_of_sets) - set.intersection(*list_of_sets)
{0, 2, 6, 7}
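For example, a self-contained run with the lists from the question:
lists = [[0, 3, 4], [2, 3, 4, 7], [2, 3, 4, 6]]
list_of_sets = [set(l) for l in lists]
result = set.union(*list_of_sets) - set.intersection(*list_of_sets)
print(sorted(result))  # [0, 2, 6, 7]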
|
Keep the order of list in sql pagination
|
I have a list with an order of insertion. I want to paginate the results using the same order. As you can see, currently the output comes back in a different order.
following_companies_list_data = Company.query.filter(Company.id.in_(['2', '24', '1', '7', '373'])).paginate(
    page, per_page=10, error_out=False)
companies = following_companies_list_data.items

for i in companies:
    print i.id
7
24
373
2
1
related question
|
Solution based on this answer from related question
company_ids = ['2', '24', '1', '7', '373']
order_expressions = [(Company.id==i).desc() for i in company_ids]
query = Company.query.filter(Company.id.in_(company_ids)).order_by(*order_expressions)
following_companies_list_data = query.paginate(page, per_page=10, error_out=False)
Also, you can use the idx function from the intarray module:
from sqlalchemy import func
company_ids = ['2', '24', '1', '7', '373']
query = Company.query.filter(Company.id.in_(company_ids)).order_by(func.idx(company_ids, Company.id))
following_companies_list_data = query.paginate(page, per_page=10, error_out=False)
|
Python list order
|
In the small script I wrote, the .append() function adds the entered item to the beginning of the list instead of the end. (As you can probably tell, I am quite new to Python, so go easy on me.)
list.append(x)
Add an item to the end of the list; equivalent to a[len(a):] = [x].
That's what is says in https://docs.python.org/2/tutorial/datastructures.html.
You can see my code below:
user_input = []

def getting_text(entered_text):
    if entered_text == "done":
        print "entering the texts are done!"
    else:
        getting_text(raw_input("Enter the text or write done to finish entering "))
        user_input.append(entered_text)
getting_text(raw_input("Enter the first text "))
print user_input
Am I misunderstanding something here, because the print function prints c,b,a instead of a,b,c (the order I entered the input is a,b,c)
|
Ok, this is what's happening.
When your text isn't "done", you've programmed it so that you immediately call the function again (i.e, recursively call it). Notice how you've actually set it to append an item to the list AFTER you do the getting_text(raw_input("Enter the text or write done to finish entering ")) line.
So basically, when you add your variables, it's going to add all of the variables AFTER it's done with the recursive function.
Hence, when you type a, then it calls the function again (hasn't inputted anything to the list yet). Then you type b, then c. When you type done, the recursive bit is finished. NOW, it does user_input.append(.... HOWEVER, the order is reversed because it deals with c first since that was the latest thing.
This can be shown when you print the list inside the function:
>>> def getting_text(entered_text):
...     print user_input
...     if entered_text == "done":
...         print "entering the texts are done!"
...     else:
...         getting_text(raw_input("Enter the text or write done to finish entering "))
...         user_input.append(entered_text)
...
>>>
>>> getting_text(raw_input("Enter the first text "))
Enter the first text a
[]
Enter the text or write done to finish entering b
[]
Enter the text or write done to finish entering c
[]
Enter the text or write done to finish entering done
[]
entering the texts are done!
>>> user_input
['c', 'b', 'a']
Note the print statement line 2.
So how do you fix this? Simple: append to the list before you recursively call.
>>> user_input = []
>>> def getting_text(entered_text):
...     if entered_text == "done":
...         print "entering the texts are done!"
...     else:
...         user_input.append(entered_text)
...         getting_text(raw_input("Enter the text or write done to finish entering "))
...
>>> user_input = []
>>> getting_text(raw_input("Enter the first text "))
Enter the first text a
Enter the text or write done to finish entering b
Enter the text or write done to finish entering c
Enter the text or write done to finish entering done
entering the texts are done!
>>> user_input
['a', 'b', 'c']
|
How do I search a list that is in a nested list (list of list) without loop in Python?
|
I am perfectly aware of this:
sample=[[1,[1,0]],[1,1]]
[1,[1,0]] in sample
This will return True.
But what I want to do here is this.
sample=[[1,[1,0]],[1,1]]
[1,0] in sample
I want the return to be True, but this returns False.
I can do this:
sample=[[1,[1,0]],[1,1]]
for i in range(len(sample)):
    [1,0] in sample[i]
But I am wondering if there is any better or efficient way of doing it.
|
You can use chain from itertools to merge the lists and then search the returned iterator.
>>> sample=[[1,[1,0]],[1,1]]
>>> from itertools import chain
>>> print [1,0] in chain(*sample)
True
|
Why isn't this alternative to the deprecated Factory.set_creation_function working with nosetests?
|
Factory Boy deprecated set_creation_function (see ChangeLog 2.6.1) and recommends that developers
Replace factory.set_creation_function(SomeFactory, creation_function)
with an override of the _create() method of SomeFactory
I have i) a number of derivative factory classes and ii) my db session instantiated in another module so I tried replacing the working example from https://github.com/mattupstate/overholt with the second code block below. PyCharm is warning me that the "db" import is not being used, so I suspect it might not be being dereferenced properly when I set sqlalchemy_session?
Working with nosetests 1.3.7 (but FactoryBoy's set_creation_function is now deprecated):
from myapp.core import db

def create_sqlalchemy_model_function(class_to_create, *args, **kwargs):
    entity = class_to_create(**kwargs)
    db.session.add(entity)
    db.session.commit()
    return entity

Factory.set_creation_function(create_sqlalchemy_model_function)
Not working with nosetests 2.x (looking like db is not being referenced properly?)
from factory.alchemy import SQLAlchemyModelFactory as Factory
from myapp.core import db

class Factory():
    class Meta:
        sqlalchemy_session = db.session

    def _create(cls, model_class, *args, **kwargs):
        entity = model_class(*args, **kwargs)
        db.session.add(entity)
        db.session.commit()
        return entity
|
Two major issues with your sample not-working code:
the class should be derived from SQLAlchemyModelFactory class
the _create() method should be defined as classmethod
Fixed version:
from factory.alchemy import SQLAlchemyModelFactory as Factory
from myapp.core import db

class MyFactory(Factory):
    class Meta:
        sqlalchemy_session = db.session

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        entity = model_class(*args, **kwargs)
        db.session.add(entity)
        db.session.commit()
        return entity
Here is also a sample of the model factory overriding the _create() method.
|
Correctly extract Emojis from a Unicode string
|
I am working in Python 2 and I have a string containing emojis as well as other unicode characters. I need to convert it to a list where each entry in the list is a single character/emoji.
x = u'😘😘xyz😊😊'
char_list = [c for c in x]
The desired output is:
['😘', '😘', 'x', 'y', 'z', '😊', '😊']
The actual output is:
[u'\ud83d', u'\ude18', u'\ud83d', u'\ude18', u'x', u'y', u'z', u'\ud83d', u'\ude0a', u'\ud83d', u'\ude0a']
How can I achieve the desired output?
|
First of all, in Python2, you need to use Unicode strings (u'<...>') for Unicode characters to be seen as Unicode characters. And correct source encoding if you want to use the chars themselves rather than the \UXXXXXXXX representation in source code.
Now, as per Python: getting correct string length when it contains surrogate pairs and Python returns length of 2 for single Unicode character string, in Python2 "narrow" builds (with sys.maxunicode==65535), 32-bit Unicode characters are represented as surrogate pairs, and this is not transparent to string functions. This has only been fixed in 3.3 (PEP0393).
The simplest resolution (save for migrating to 3.3+) is to compile a Python "wide" build from source as outlined on the 3rd link. In it, Unicode characters are all 4-byte (thus are a potential memory hog) but if you need to routinely handle wide Unicode chars, this is probably an acceptable price.
The solution for a "narrow" build is to make a custom set of string functions (len, slice; maybe as a subclass of unicode) that would detect surrogate pairs and handle them as a single character. I couldn't readily find an existing one (which is strange), but it's not too hard to write:
as per UTF-16#U+10000 to U+10FFFF - Wikipedia,
the 1st character (high surrogate) is in range 0xD800..0xDBFF
the 2nd character (low surrogate) - in range 0xDC00..0xDFFF
these ranges are reserved and thus cannot occur as regular characters
So here's the code to detect a surrogate pair:
def is_surrogate(s,i):
    if 0xD800 <= ord(s[i]) <= 0xDBFF:
        try:
            l = s[i+1]
        except IndexError:
            return False
        if 0xDC00 <= ord(l) <= 0xDFFF:
            return True
        else:
            raise ValueError("Illegal UTF-16 sequence: %r" % s[i:i+2])
    else:
        return False
And a function that returns a simple slice:
def slice(s,start,end):
    l=len(s)
    i=0
    while i<start and i<l:
        if is_surrogate(s,i):
            start+=1
            end+=1
            i+=1
        i+=1
    while i<end and i<l:
        if is_surrogate(s,i):
            end+=1
            i+=1
        i+=1
    return s[start:end]
Here, the price you pay is performance, as these functions are much slower than built-ins:
>>> ux=u"a"*5000+u"\U00100000"*30000+u"b"*50000
>>> timeit.timeit('slice(ux,10000,100000)','from __main__ import slice,ux',number=1000)
46.44128203392029 #msec
>>> timeit.timeit('ux[10000:100000]','from __main__ import slice,ux',number=1000000)
8.814016103744507 #usec
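Finally, to produce the per-character list the question asks for, here is a sketch that builds on is_surrogate() above (narrow Python 2 build assumed):
def to_char_list(s):
    chars = []
    i = 0
    while i < len(s):
        if is_surrogate(s, i):
            chars.append(s[i:i+2])  # keep the surrogate pair together as one "character"
            i += 2
        else:
            chars.append(s[i])
            i += 1
    return chars

x = u'\U0001F618\U0001F618xyz\U0001F60A\U0001F60A'
print to_char_list(x)  # the two-code-unit emoji stay as single entries alongside 'x', 'y', 'z'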
|
Get the number of all keys in a dictionary of dictionaries in Python
|
I have a dictionary of dictionaries in Python 2.7.
I need to quickly count the number of all keys, including the keys within each of the dictionaries.
So in this example I would need the number of all keys to be 6:
dict_test = {'key2': {'key_in3': 'value', 'key_in4': 'value'}, 'key1': {'key_in2': 'value', 'key_in1': 'value'}}
I know I can iterate through each key with for loops, but I am looking for a quicker way to do this, since I will have thousands/millions of keys and doing it that way is just inefficient:
count_the_keys = 0

for key in dict_test.keys():
    for key_inner in dict_test[key].keys():
        count_the_keys += 1

# something like this would be more effective
# of course .keys().keys() doesn't work
print len(dict_test.keys()) * len(dict_test.keys().keys())
|
Keeping it Simple
If we know all the values are dictionaries, and do not wish to check that any of their values are also dictionaries, then it is as simple as:
len(dict_test) + sum(len(v) for v in dict_test.itervalues())
Refining it a little, to actually check that the values are dictionaries before counting them:
len(dict_test) + sum(len(v) for v in dict_test.itervalues() if isinstance(v, dict))
And finally, if you wish to do an arbitrary depth, something like the following:
def sum_keys(d):
    return (0 if not isinstance(d, dict)
            else len(d) + sum(sum_keys(v) for v in d.itervalues()))

print sum_keys({'key2': {'key_in3': 'value', 'key_in4': 'value'},
                'key1': {'key_in2': 'value',
                         'key_in1': dict(a=2)}})
# => 7
In this last case, we define a function that will be called recursively. Given a value d, we return either:
0 if that value is not a dictionary; or
the number of keys in the dictionary, plus the total of keys in all of our children.
Making it Faster
The above is a succinct and easily understood approach. We can get a little faster using a generator:
def _counter(d):
    # how many keys do we have?
    yield len(d)

    # stream the key counts of our children
    for v in d.itervalues():
        if isinstance(v, dict):
            for x in _counter(v):
                yield x

def count_faster(d):
    return sum(_counter(d))
This gets us a bit more performance:
In [1]: %timeit sum_keys(dict_test)
100000 loops, best of 3: 4.12 µs per loop
In [2]: %timeit count_faster(dict_test)
100000 loops, best of 3: 3.29 µs per loop
|
What can `__init__` do that `__new__` cannot?
|
In Python, __new__ is used to initialize immutable types and __init__ typically initializes mutable types. If __init__ were removed from the language, what could no longer be done (easily)?
For example,
class A:
    def __init__(self, *, x, **kwargs):
        super().__init__(**kwargs)
        self.x = x

class B(A):
    def __init__(self, y=2, **kwargs):
        super().__init__(**kwargs)
        self.y = y
Could be rewritten using __new__ like this:
class A_N:
    def __new__(cls, *, x, **kwargs):
        obj = super().__new__(cls, **kwargs)
        obj.x = x
        return obj

class B_N(A_N):
    def __new__(cls, y=2, **kwargs):
        obj = super().__new__(cls, **kwargs)
        obj.y = y
        return obj
Clarification for scope of question: This is not a question about how __init__ and __new__ are used or what is the difference between them. This is a question about what would happen if __init__ were removed from the language. Would anything break? Would anything become a lot harder or impossible to do?
|
Note about difference between __new__ and __init__
Before explaining missing functionality let's get back to definition of __new__ and __init__:
__new__ is the first step of instance creation. It's called first, and is responsible for returning a new instance of your class.
However, __init__ doesn't return anything; it's only responsible for
initializing the instance after it's been created.
Consequences of replacing __init__ to __new__
Mainly, you would lose flexibility. You would get a lot of semantic headaches and lose the separation of initialization and construction (by joining __new__ and __init__ we would be joining construction and initialization into one step...).
Let's take a look on snippet below:
class A(object):
    some_property = 'some_value'

    def __new__(cls, *args, **kwargs):
        obj = object.__new__(cls, *args, **kwargs)
        obj.some_property = cls.some_property
        return obj

class B(A):
    some_property = 2

    def __new__(cls, *args, **kwargs):
        obj = super(B, cls).__new__(cls)
        return obj
Consequences of moving __init__ actions into __new__:
Initialize B before A: when you use the __new__ method instead of __init__, your first step in creating a new instance of B is calling A.__new__; as a side effect, you cannot initialize B before A is initialized (i.e. access and assign some properties on the new B instance first). Using __init__ gives you that flexibility.
Lose control of the initialization order: imagine that B_N inherits from two classes (A_N1, A_N2); now you would lose control over the order in which the new B_N instance is initialized (in what order are you going to initialize the instances? It can matter... which is weird.)
Properties and methods mess: you would miss access to A.some_property (cls would be equal to B while instantiating a new instance of B; directly accessing A.some_property is possible, but it is at least weird to access properties within a class through the class name rather than by using classmethods).
You cannot re-initialize an existing instance without creating a new one or implementing special logic for this (thanks to @platinhom for the idea).
What can __init__ do that __new__ cannot?
There are no actions that can be done in __init__ but not in __new__, because the actions that __init__ performs are a subset of the actions that can be performed by __new__.
An interesting note from the Python docs, Pickling and unpickling normal class instances (object.__getinitargs__), regarding when __init__ could be useful:
When a pickled class instance is unpickled, its __init__() method is normally not invoked.
|
Setting up the EB CLI - error nonetype get_frozen_credentials
|
Select a default region
1) us-east-1 : US East (N. Virginia)
2) us-west-1 : US West (N. California)
3) us-west-2 : US West (Oregon)
4) eu-west-1 : EU (Ireland)
5) eu-central-1 : EU (Frankfurt)
6) ap-southeast-1 : Asia Pacific (Singapore)
7) ap-southeast-2 : Asia Pacific (Sydney)
8) ap-northeast-1 : Asia Pacific (Tokyo)
9) ap-northeast-2 : Asia Pacific (Seoul)
10) sa-east-1 : South America (Sao Paulo)
11) cn-north-1 : China (Beijing)
(default is 3):5
When I choose a number or just leave it blank, the following error appears:
ERROR: AttributeError :: 'NoneType' object has no attribute
'get_frozen_credentials'
after running eb init --debug:
Traceback (most recent call last): File "/usr/local/bin/eb", line 11,
in
sys.exit(main()) File "/Library/Python/2.7/site-packages/ebcli/core/ebcore.py", line 149, in
main
app.run() File "/Library/Python/2.7/site-packages/cement/core/foundation.py", line
694, in run
self.controller._dispatch()
File "/Library/Python/2.7/site-packages/cement/core/controller.py", line
455, in _dispatch
return func()
File "/Library/Python/2.7/site-packages/cement/core/controller.py", line
461, in _dispatch
return func()
File "/Library/Python/2.7/site-packages/ebcli/core/abstractcontroller.py",
line 57, in default
self.do_command()
File "/Library/Python/2.7/site-packages/ebcli/controllers/initialize.py",
line 67, in do_command
self.set_up_credentials()
File "/Library/Python/2.7/site-packages/ebcli/controllers/initialize.py",
line 152, in set_up_credentials
if not initializeops.credentials_are_valid():
File "/Library/Python/2.7/site-packages/ebcli/operations/initializeops.py",
line 24, in credentials_are_valid
elasticbeanstalk.get_available_solution_stacks()
File "/Library/Python/2.7/site-packages/ebcli/lib/elasticbeanstalk.py",
line 239, in get_available_solution_stacks
result = _make_api_call('list_available_solution_stacks')
File "/Library/Python/2.7/site-packages/ebcli/lib/elasticbeanstalk.py",
line 37, in _make_api_call
**operation_options)
File "/Library/Python/2.7/site-packages/ebcli/lib/aws.py", line 207, in make_api_call
response_data = operation(**operation_options)
File "/Library/Python/2.7/site-packages/botocore/client.py", line 310, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Library/Python/2.7/site-packages/botocore/client.py", line 396, in _make_api_call
operation_model, request_dict)
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 111, in make_request
return self._send_request(request_dict, operation_model)
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 136, in _send_request
request = self.create_request(request_dict, operation_model)
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 120, in create_request
operation_name=operation_model.name)
File "/Library/Python/2.7/site-packages/botocore/hooks.py", line 226, in emit
return self._emit(event_name, kwargs)
File "/Library/Python/2.7/site-packages/botocore/hooks.py", line 209, in _emit
response = handler(**kwargs)
File "/Library/Python/2.7/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/Library/Python/2.7/site-packages/botocore/signers.py", line 123, in sign
signature_version)
File "/Library/Python/2.7/site-packages/botocore/signers.py", line 153, in get_auth_instance
kwargs['credentials'] = self._credentials.get_frozen_credentials()
AttributeError: 'NoneType' object has no attribute 'get_frozen_credentials'
|
You got this error because you haven't set up your AWS Access Key ID and AWS Secret Access Key yet.
You should first install the AWS CLI by running pip install awscli.
After that, configure your AWS credentials:
aws configure
Once that is done you can run eb init.
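For reference, the whole sequence looks roughly like this (a sketch; aws configure will prompt for the Access Key ID, Secret Access Key, default region and output format, and stores them under ~/.aws/ where the EB CLI can find them):
pip install awscli
aws configure
eb init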
|
Can I use index information inside the map function?
|
Let's assume there is a list a = [1, 3, 5, 6, 8].
I want to apply some transformation on that list and I want to avoid doing it sequentially, so something like map(someTransformationFunction, a) would normally do the trick, but what if the transformation needs to have knowledge of the index of each object?
For example let's say that each element must be multiplied by its position. So the list should be transformed to a = [0, 3, 10, 18, 32].
Is there a way to do that?
|
Use the enumerate() function to add indices:
map(function, enumerate(a))
Your function will be passed a tuple, with (index, value). In Python 2, you can specify that Python unpack the tuple for you in the function signature:
map(lambda (i, el): i * el, enumerate(a))
Note the (i, el) tuple in the lambda argument specification. You can do the same in a def statement:
def mapfunction((i, el)):
return i * el
map(mapfunction, enumerate(a))
To make way for other function signature features such as annotations, tuple unpacking in function arguments has been removed from Python 3.
Demo:
>>> a = [1, 3, 5, 6, 8]
>>> def mapfunction((i, el)):
... return i * el
...
>>> map(lambda (i, el): i * el, enumerate(a))
[0, 3, 10, 18, 32]
>>> map(mapfunction, enumerate(a))
[0, 3, 10, 18, 32]
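For completeness, a Python 3-friendly version of the same idea (a small sketch): index the tuple explicitly instead of unpacking it in the signature, and materialize the map with list():
a = [1, 3, 5, 6, 8]
list(map(lambda pair: pair[0] * pair[1], enumerate(a)))  # [0, 3, 10, 18, 32]
[i * el for i, el in enumerate(a)]                       # equivalent list comprehension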
|
Break statement in finally block swallows exception
|
Consider:
def raiseMe( text="Test error" ):
raise Exception( text )
def break_in_finally_test():
for i in range(5):
if i==2:
try:
raiseMe()
except:
raise
else:
print "succeeded!"
finally:
print "testing this!"
break
if __name__=='__main__':
break_in_finally_test()
I expected to see Exception( "Test error" ) to be raised, but instead only "testing this" is printed. The intention, of course, was to call raiseMe() only once, no matter if we succeed or not - but if it raises an exception, I would have wanted to see that!
Why does break swallow the exception that I explicitly raise?
|
From https://docs.python.org/2.7/reference/compound_stmts.html#finally:
If finally is present, it specifies a 'cleanup' handler. The try clause is executed, including any except and else clauses. If an exception occurs in any of the clauses and is not handled, the exception is temporarily saved. The finally clause is executed. If there is a saved exception, it is re-raised at the end of the finally clause. If the finally clause raises another exception or executes a return or break statement, the saved exception is discarded
This also reflects the behaviour of the try...finally statement before PEP 341:
This is how a try/except/finally block looked pre PEP 341:
try:
try:
raiseMe()
except:
raise
finally:
#here is where cleanup is supposed to happen before raising error
break
#after finally code: raise error
Because the saved exception would only be re-raised after the finally block, and the break leaves the loop before that point is reached, the exception is never actually raised.
To maintain backwards compatibility with Python versions 2.4 and earlier it had to be done this way.
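One way to get the behaviour you expected (a sketch, not part of the quoted documentation) is to keep break out of the finally clause, so the saved exception is re-raised as usual:
def break_in_finally_fixed():
    for i in range(5):
        if i == 2:
            try:
                raiseMe()
            finally:
                print "testing this!"  # cleanup still runs either way
            break  # only reached if raiseMe() did not raise; otherwise the exception propagates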
|
numpy array set ones between two values, fast
|
I have been looking for a solution to this problem for a while but can't seem to find anything.
For example, I have a numpy array of
[ 0, 0, 2, 3, 2, 4, 3, 4, 0, 0, -2, -1, -4, -2, -1, -3, -4, 0, 2, 3, -2, -1, 0]
what I would like to achieve is the generate another array to indicate the elements between a pair of numbers, let's say between 2 and -2 here. So I want to get an array like this
[ 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0]
Notice that any 2 or -2 between a pair of (2, -2) is ignored. An easy approach is to iterate through each element with a for loop, find the first occurrence of 2 and set everything after that to 1 until you hit a -2, then start looking for the next 2 again.
But I would like this process to be faster as I have over 1000 elements in a numpy array, and this process needs to be done a lot of times. Do you guys know any elegant way to solve this? Thanks in advance!
|
Quite a problem that is! Listed in this post is a vectorized solution (hopefully the inlined comments would help to explain the logic behind it). I am assuming A as the input array with T1, T2 as the start and stop triggers.
def setones_between_triggers(A,T1,T2):
# Get start and stop indices corresponding to rising and falling triggers
start = np.where(A==T1)[0]
stop = np.where(A==T2)[0]
# Take care of boundary conditions for np.searchsorted to work
if (stop[-1] < start[-1]) & (start[-1] != A.size-1):
stop = np.append(stop,A.size-1)
# This is where the magic happens.
# Validate (filter out) the triggers based on the set conditions :
# 1. See if there are more than one stop indices between two start indices.
# If so, use the first one and rejecting all others in that in-between space.
# 2. Repeat the same check for start, but use the validated start indices.
# First off, take care of out-of-bound cases for proper indexing
stop_valid_idx = np.unique(np.searchsorted(stop,start,'right'))
stop_valid_idx = stop_valid_idx[stop_valid_idx < stop.size]
stop_valid = stop[stop_valid_idx]
_,idx = np.unique(np.searchsorted(stop_valid,start,'left'),return_index=True)
start_valid = start[idx]
# Create shifts array (array filled with zeros, unless triggered by T1 and T2
# for which we have +1 and -1 as triggers).
shifts = np.zeros(A.size,dtype=int)
shifts[start_valid] = 1
shifts[stop_valid] = -1
# Perform cumm. summation that would almost give us the desired output
out = shifts.cumsum()
# For a worst case when we have two groups of (T1,T2) adjacent to each other,
# set the negative trigger position as 1 as well
out[stop_valid] = 1
return out
Sample runs
Original sample case :
In [1589]: A
Out[1589]:
array([ 0, 0, 2, 3, 2, 4, 3, 4, 0, 0, -2, -1, -4, -2, -1, -3, -4,
0, 2, 3, -2, -1, 0])
In [1590]: setones_between_triggers(A,2,-2)
Out[1590]: array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0])
Worst case #1 (adjacent (2,-2) groups) :
In [1595]: A
Out[1595]:
array([-2, 2, 0, 2, -2, 2, 2, 2, 4, -2, 0, -2, -2, -4, -2, -1, 2,
-4, 0, 2, 3, -2, -2, 0])
In [1596]: setones_between_triggers(A,2,-2)
Out[1596]:
array([0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0,
0], dtype=int32)
Worst case #2 (2 without any -2 till end) :
In [1603]: A
Out[1603]:
array([-2, 2, 0, 2, -2, 2, 2, 2, 4, -2, 0, -2, -2, -4, -2, -1, -2,
-4, 0, 2, 3, 5, 6, 0])
In [1604]: setones_between_triggers(A,2,-2)
Out[1604]:
array([0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1,
1], dtype=int32)
|
Celery (Redis) results backend not working
|
I have a web application using Django and I am using Celery for some asynchronous task processing.
For Celery, I am using RabbitMQ as a broker and Redis as a result backend.
RabbitMQ and Redis are running on the same Ubuntu 14.04 server hosted on a local virtual machine.
Celery workers are running on remote machines (Windows 10) (no workers are running on the Django server).
I have three issues (I think they are somehow related!).
The tasks stay in the 'PENDING' state no matter whether they succeed or fail.
The tasks don't retry when they fail, and I get this error when trying to retry:
reject requeue=False: [WinError 10061] No connection could be made
because the target machine actively refused it
The results backend doesn't seem to work.
I am also confused about my settings, and I don't know exactly where these issues might come from!
So here are my settings so far:
my_app/settings.py
# region Celery Settings
CELERY_CONCURRENCY = 1
CELERY_ACCEPT_CONTENT = ['json']
# CELERY_RESULT_BACKEND = 'redis://:C@pV@lue2016@cvc.ma:6379/0'
BROKER_URL = 'amqp://soufiaane:C@pV@lue2016@cvc.ma:5672/cvcHost'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACKS_LATE = True
CELERYD_PREFETCH_MULTIPLIER = 1
CELERY_REDIS_HOST = 'cvc.ma'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_RESULT_BACKEND = 'redis'
CELERY_RESULT_PASSWORD = "C@pV@lue2016"
REDIS_CONNECT_RETRY = True
AMQP_SERVER = "cvc.ma"
AMQP_PORT = 5672
AMQP_USER = "soufiaane"
AMQP_PASSWORD = "C@pV@lue2016"
AMQP_VHOST = "/cvcHost"
CELERYD_HIJACK_ROOT_LOGGER = True
CELERY_HIJACK_ROOT_LOGGER = True
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
# endregion
my_app/celery_settings.py
from __future__ import absolute_import
from django.conf import settings
from celery import Celery
import django
import os
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')
django.setup()
app = Celery('CapValue', broker='amqp://soufiaane:C@pV@lue2016@cvc.ma/cvcHost', backend='redis://:C@pV@lue2016@cvc.ma:6379/0')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
my_app__init__.py
from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery_settings import app as celery_app
my_app\email\tasks.py
from __future__ import absolute_import
from my_app.celery_settings import app
# here i only define the task skeleton because i'm executing this task on remote workers !
@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
try:
print("x")
except Exception as exc:
self.retry(exc=exc)
on the workers side i have one file 'tasks.py' which have the actual implementation of the task:
Worker\tasks.py
from __future__ import absolute_import
from celery.utils.log import get_task_logger
from celery import Celery
logger = get_task_logger(__name__)
app = Celery('CapValue', broker='amqp://soufiaane:C@pV@lue2016@cvc.ma/cvcHost', backend='redis://:C@pV@lue2016@cvc.ma:6379/0')
@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
try:
"""
The actual implementation of the task
"""
except Exception as exc:
self.retry(exc=exc)
What I did notice though is:
when I change the broker settings in my workers to a bad password, I get a "could not connect to broker" error.
when I change the result backend settings in my workers to a bad password, it runs normally as if everything is OK.
What could possibly be causing these problems?
EDIT
On my Redis server, I have already enabled remote connections
/etc/redis/redis.conf
...
bind 0.0.0.0
...
|
My guess is that your problem is in the password.
Your password has @ in it, which could be interpreted as a divider between the user:pass section and the host section.
The tasks stay in pending because the workers could not connect to the broker correctly.
From celery's documentation
http://docs.celeryproject.org/en/latest/userguide/tasks.html#pending
PENDING
Task is waiting for execution or unknown. Any task id that is not known is implied to be in the pending state.
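If the @ in the password is indeed the culprit, one common workaround (a sketch, not part of the original answer) is to percent-encode the password before building the broker and backend URLs:
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote        # Python 2

password = quote('C@pV@lue2016', safe='')   # 'C%40pV%40lue2016'
BROKER_URL = 'amqp://soufiaane:%s@cvc.ma:5672/cvcHost' % password
CELERY_RESULT_BACKEND = 'redis://:%s@cvc.ma:6379/0' % password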
|
Django application 504 error after saving model
|
I have a Django website running Django 1.8 with Python 3.4 and hosted on AWS via ElasticBeanstalk.
Recently, I've been having some issues with the Django admin area and 504 errors. The problem is very difficult to reproduce, it seems to happen randomly.
When I save an instance of a model, sometimes the website hangs and returns a 504 error (and doesn't save). Afterwhich elasticbeanstalk restarts the server and everything works fine again.
In my logs I get the following errors.
End of script output before headers: wsgi.py
extern "Python": function Cryptography_rand_bytes() called, but @ffi.def_extern() was not called in the current subinterpreter. Returning 0.
These two errors are repeated multiple times. Can anyone help me figure out how I can debug this?
Thank you!
|
It is probably due to this bug:
https://github.com/pyca/cryptography/issues/2299
How to fix it is discussed here:
https://github.com/pyca/cryptography/issues/2473
which seems to say: uninstall the Python cryptography library and then pip install version 1.1 of it.
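In practice that boils down to something like the following (a sketch; pin whichever version the linked issue recommends for your setup):
pip uninstall cryptography
pip install cryptography==1.1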
|
check if two lists are equal by type Python
|
I want to check if two lists have the same type of items for every index. For example if I have
y = [3, "a"]
x = [5, "b"]
z = ["b", 5]
the check should be True for x and y.
The check should be False for y and z because the types of the elements at the same positions are not equal.
|
Just map the elements to their respective type and compare those:
>>> x = [5, "b"]
>>> y = [3, "a"]
>>> z = ["b", 5]
>>> map(type, x) == map(type, y)
True
>>> map(type, x) == map(type, z)
False
For Python 3, you will also have to turn the map generators into proper lists, either by using the list function or with a list comprehension:
>>> list(map(type, x)) == list(map(type, y))
True
>>> [type(i) for i in x] == [type(i) for i in z]
False
I did some timing analysis, comparing the above solution to that of @timgeb, using all and izip and inputs with the first non-matching type in different positions. As expected, the time taken for the map solution is almost exactly the same for each input, while the all + izip solution can be very fast or take three times as long, depending on the position of the first difference.
In [52]: x = [1] * 1000 + ["s"] * 1000
In [53]: y = [2] * 1000 + ["t"] * 1000 # same types as x
In [54]: z = ["u"] * 1000 + [3] * 1000 # difference at first element
In [55]: u = [4] * 2000 # difference after first half
In [56]: %timeit map(type, x) == map(type, y)
10000 loops, best of 3: 129 µs per loop
In [58]: %timeit all(type(i) == type(j) for i, j in izip(x, y))
1000 loops, best of 3: 342 µs per loop
In [59]: %timeit all(type(i) == type(j) for i, j in izip(x, z))
1000000 loops, best of 3: 748 ns per loop
In [60]: %timeit all(type(i) == type(j) for i, j in izip(x, u))
10000 loops, best of 3: 174 µs per loop
|
How to check if two permutations are symmetric?
|
Given two permutations A and B of L different elements, L is even, let's call these permutations "symmetric" (for a lack of a better term), if there exist n and m, m > n such as (in python notation):
- A[n:m] == B[L-m:L-n]
- B[n:m] == A[L-m:L-n]
- all other elements are in place
Informally, consider
A = 0 1 2 3 4 5 6 7
Take any slice of it, for example 1 2. It starts at the second index and its length is 2. Now take a slice symmetric to it: it ends at the penultimate index and is 2 chars long too, so it's 5 6. Swapping these slices gives
B = 0 5 6 3 4 1 2 7
Now, A and B are "symmetric" in the above sense (n=1, m=3). On the other hand
A = 0 1 2 3 4 5 6 7
B = 1 0 2 3 4 5 7 6
are not "symmetric" (no n,m with above properties exist).
How can I write an algorithm in python that finds if two given permutations (=lists) are "symmetric" and if yes, find the n and m? For simplicity, let's consider only even L (because the odd case can be trivially reduced to the even one by eliminating the middle fixed element) and assume correct inputs (set(A)==set(B), len(set(A))==len(A)).
(I have no problem bruteforcing all possible symmetries, but looking for something smarter and faster than that).
Fun fact: the number of symmetric permutations for the given L is a Triangular number.
I use this code to test out your answers.
Bounty update: many excellent answers here. @Jared Goguen's solution appears to be the fastest.
Final timings:
testing 0123456789 L= 10
test_alexis ok in 15.4252s
test_evgeny_kluev_A ok in 30.3875s
test_evgeny_kluev_B ok in 27.1382s
test_evgeny_kluev_C ok in 14.8131s
test_ian ok in 26.8318s
test_jared_goguen ok in 10.0999s
test_jason_herbburn ok in 21.3870s
test_tom_karzes ok in 27.9769s
|
Here is the working solution for the question:
def isSymmetric(A, B):
L = len(A) #assume equivalent to len(B), modifying this would be as simple as checking if len(A) != len(B), return []
la = L//2 # half-list length
Al = A[:la]
Ar = A[la:]
Bl = B[:la]
Br = B[la:]
for i in range(la):
lai = la - i #just to reduce the number of computation we need to perform
for j in range(1, lai + 1):
k = lai - j #same here, reduce computation
if Al[i] != Br[k] or Ar[k] != Bl[i]: #the key for efficient computation is here: do not proceed unnecessarily
continue
n = i #written only for the sake of clarity. i is n, and we can use i directly
m = i + j
if A[n:m] == B[L-m:L-n] and B[n:m] == A[L-m:L-n]: #possibly symmetric
if A[0:n] == B[0:n] and A[m:L-m] == B[m:L-m] and A[L-n:] == B[L-n:]:
return [n, m]
return []
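A quick usage sketch with the examples from the question (assuming the function above):
A = [0, 1, 2, 3, 4, 5, 6, 7]
B = [0, 5, 6, 3, 4, 1, 2, 7]
print(isSymmetric(A, B))                         # [1, 3]
print(isSymmetric(A, [1, 0, 2, 3, 4, 5, 7, 6]))  # [] -- not "symmetric"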
As you have mentioned, although the idea looks simple, it is actually quite tricky. Once we see the patterns, however, the implementation is straightforward.
The central idea of the solution is this single line:
if Al[i] != Br[k] or Ar[k] != Bl[i]: #the key for efficient computation is here: do not proceed unnecessarily
All other lines are just either direct code translation from the problem statement or optimization made for more efficient computation.
There are few steps involved in order to find the solution:
Firstly, we need to split each of the lists A and B into two half-lists (called Al, Ar, Bl, and Br). Each half-list contains half of the members of the original list:
Al = A[:la]
Ar = A[la:]
Bl = B[:la]
Br = B[la:]
Secondly, to make the evaluation efficient, the goal here is to find what I would call a pivot index, used to decide whether a position in the list (an index) is worth evaluating when checking if the lists are symmetric. This pivot index is the central idea of an efficient solution, so I will try to elaborate on it a bit:
Consider the left half part of the A list, suppose you have a member like this:
Al = [al1, al2, al3, al4, al5, al6]
We can imagine that there is a corresponding index list for the mentioned list like this
Al = [al1, al2, al3, al4, al5, al6]
iAl = [0, 1, 2, 3, 4, 5 ] #corresponding index list, added for explanation purpose
(Note: the reason why I mention of imagining a corresponding index list is for ease of explanation purposes)
Likewise, we can imagine that the other three lists may have similar index lists. Let's name them iAr, iBl, and iBr respectively and they are all having identical members with iAl.
It is the index of the lists which would really matter for us to look into - in order to solve the problem.
Here is what I mean: suppose we have two parameters:
index (let's give a variable name i to it, and I would use symbol ^ for current i)
length (let's give a variable name j to it, and I would use symbol == to visually represent its length value)
for each evaluation of the index element in iAl - then each evaluation would mean:
Given an index value i and length value of j in iAl, do
something to determine if it is worth to check for symmetric
qualifications starting from that index and with that length
(Hence the name pivot index come).
Now, let's take example of one evaluation when i = 0 and j = 1. The evaluation can be illustrated as follow:
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 0
== <-- now this has length (j) of 1
In order for that index i and length j to be worth evaluating further, the counterpart iBr must have the same item value with the same length but at a different index (let's name it index k):
iBr = [0, 1, 2, 3, 4, 5]
^ <-- must compare the value in this index to what is pointed by iAl
== <-- must evaluate with the same length = 1
For example, for the above case, this is a possible "symmetric" permutation just for the two lists Al-Br (we will consider the other two lists Ar-Bl later):
Al = [0, x, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, x, 0]
At this moment, it is good to note that
It is not worth evaluating further if even the above condition is not
true
And this is how the algorithm becomes more efficient: by selectively evaluating only the few possible cases among all possible cases. And how do we find those few possible cases?
By finding the relationship between the indexes and lengths of the
four lists. That is, for a given index i and length j in a
list (say Al), what must the index k be in the counterpart
list (in this case Br)? The length for the counterpart list need not
be found because it is the same as in the original list (that is, j).
Knowing that, let's now proceed to see if we can find more patterns in the evaluation process.
Consider now the effect of length (j). For example, if we are to evaluate from index 0, but the length is 2 then the counterpart list would need to have different index k evaluated than when the length is 1
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 0
===== <-- now this has length (j) of 2
iBr = [0, 1, 2, 3, 4, 5]
^ <-- must compare the value in this index to what is pointed by iAl
===== <-- must evaluate with the same length = 2
Or, for the illustration above, what really matters for i = 0 and y = 2 is something like this:
# when i = 0 and y = 2
Al = [0, y, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, 0, y] #y means to be checked later
Take a look that the above pattern is a bit different from when i = 0 and y = 1 - the index position for 0 value in the example is shifted:
# when i = 0 and y = 1, k = 5
Al = [0, x, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, x, 0]
# when i = 0 and y = 2, k = 4
Al = [0, y, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, 0, y] #y means to be checked later
Thus, the length shifts where the index of the counterpart list must be checked. In the first case, when i = 0 and y = 1, k = 5. But in the second case, when i = 0 and y = 2, k = 4. Thus we have found the pivot-index relationship: how changing the length j for a fixed index i (in this case 0) moves the counterpart list index k.
Now, consider the effects of index i with fixed length j for counterpart list index k. For example, let's fix the length as y = 4, then for index i = 0, we have:
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 0
========== <-- now this has length (j) of 4
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 1
========== <-- now this has length (j) of 4
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 2
========== <-- now this has length (j) of 4
#And no more needed
In the above example, it can be seen that we need to evaluate 3 possibilities for the given i and j, but if the index i is changed to 1 with the same length j = 4:
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 1
========== <-- now this has length (j) of 4
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 2
========== <-- now this has length (j) of 4
Note that we only need to evaluate 2 possibilities. Thus the increase of index i decreases the number of possible cases to be evaluated!
With all the above patterns found, we have almost all the groundwork we need to make the algorithm work. To complete it, we need to find the relationship between the indexes which appear in the Al-Br pair for a given [i, j] => [k, j] and the indexes in the Ar-Bl pair for the same [i, j].
Now, we can actually see that they simply mirror the relationship we found in the Al-Br pair!
(IMHO, this is really beautiful! and thus I think the term "symmetric" permutation is not far from the truth)
For example, if we have the following Al-Br pair evaluated with i = 0 and y = 2
Al = [0, y, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, 0, y] #y means to be checked later
Then, to make it symmetric, we must have the corresponding Ar-Bl:
Ar = [x, x, x, x, 3, y] #x means don't care for now
Bl = [3, y, x, x, x, x] #y means to be checked later
The indexing of Al-Br pair is mirroring (or, is symmetric to) the indexing of Ar-Bl pair!
Therefore, combining all the patterns we found above, we can now find the pivot indexes for evaluating Al, Ar, Bl, and Br.
We only need to check the values of the lists at the pivot indexes
first. If the values of the lists at the pivot indexes of Al, Ar, Bl, and Br
match in the evaluation, then and only then do we need to check
the symmetry criteria (thus making the computation efficient!)
Putting all the knowledge above into code, the following is the resulting for-loop Python code to check for symmetry:
for i in range(len(Al)): #for every index in the list
lai = la - i #just simplification
for j in range(1, lai + 1): #get the length from 1 to la - i + 1
k = lai - j #get the mirror index
if Al[i] != Br[k] or Ar[k] != Bl[i]: #if the value in the pivot indexes do not match
continue #skip, no need to evaluate
#at this point onwards, then the values in the pivot indexes match
n = i #assign n
m = i + j #assign m
#test if the first two conditions for symmetric are passed
if A[n:m] == B[L-m:L-n] and B[n:m] == A[L-m:L-n]: #possibly symmetric
#if it passes, test the third condition for symmetric, the rests of the elements must stay in its place
if A[0:n] == B[0:n] and A[m:L-m] == B[m:L-m] and A[L-n:] == B[L-n:]:
return [n, m] #if all three conditions are passed, symmetric lists are found! return [n, m] immediately!
#passing this but not outside of the loop means
#any of the 3 conditions to find symmetry are failed
#though values in the pivot indexes match, simply continue
return [] #nothing can be found - asymmetric lists
And there you go with the symmetry test!
(OK, this was quite a challenge and it took me quite a while to figure out how.)
|
Using python decorator with or without parentheses
|
What is the difference in Python when using the same decorator with or without parentheses? For example:
Without parentheses
@someDecorator
def someMethod():
pass
With parentheses
@someDecorator()
def someMethod():
pass
|
someDecorator in the first code snippet is a regular decorator:
@someDecorator
def someMethod():
pass
is equivalent to
someMethod = someDecorator(someMethod)
On the other hand, someDecorator in the second code snippet is a callable that returns a decorator:
@someDecorator()
def someMethod():
pass
is equivalent to
someMethod = someDecorator()(someMethod)
As pointed out by Duncan in the comments, some decorators are designed to work both ways. Here's a pretty basic implementation of such a decorator:
def someDecorator(arg=None):
def decorator(func):
def wrapper(*a, **ka):
return func(*a, **ka)
return wrapper
if callable(arg):
return decorator(arg) # return 'wrapper'
else:
return decorator # ... or 'decorator'
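With that implementation, both spellings work (a quick sketch):
@someDecorator
def f():
    pass

@someDecorator()
def g():
    pass

f()  # both calls go through 'wrapper'
g()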
pytest.fixture is a more complex example.
|
Python - If not statement with 0.0
|
I have a question regarding if not statement in Python 2.7.
I have written some code and used if not statements. In one part of the code I wrote, I refer to a function which includes an if not statement to determine whether an optional keyword has been entered.
It works fine, except when 0.0 is the keyword's value. I understand this is because 0 is one of the things that is considered 'not'. My code is probably too long to post, but this is an analogous (albeit simplified) example:
def square(x=None):
if not x:
print "you have not entered x"
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print output
However, in this case I got left with:
you have not entered x
[1, 9, None, 81]
Where as I would like to get:
[1, 9, 0, 81]
In the above example I could use a list comprehension, but assuming I wanted to use the function and get the desired output how could I do this?
One thought I had was:
def square(x=None):
if not x and not str(x).isdigit():
print "you have not entered x"
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print output
This works, but seems like a bit of a clunky way of doing it. If anyone has another way that would be nice I would be very appreciative.
|
Problem
You understand it right. not 0 (and also not 0.0) returns True in Python. Simple test can be done to see this:
a = not 0
print(a)
Result: True
Thus, the problem is explained. This line:
if not x:
Must be changed to something else.
Solutions
There are a couple of ways to fix the issue. I am going to list them from what I think is the best solution down to the last possible ones:
To handle all possible valid cases.
Since square should naturally expect a number as input, with the exclusion of complex numbers, and should report an error otherwise, I think the best solution is to evaluate using if not isinstance(x, numbers.Number) or isinstance(x, complex): (the built-in complex type is used for the exclusion here, because int and float are themselves registered under the numbers.Complex ABC, so testing against numbers.Complex would reject every number)
import numbers

def square(x=None):
    if not isinstance(x, numbers.Number) or isinstance(x, complex): # this sums up every number type, with the exclusion of actual complex numbers
        print ("you have not entered x")
    else:
        y=x**2
        return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print (output)
numbers.Number is the abstract class to check if argument x is a number (credit to Copperfield for pointing this out).
Excerpt from Python Standard Library Documentation explains just what you need - with the exception of complex number:
class numbers.Number
The root of the numeric hierarchy. If you just want to check if an
argument x is a number, without caring what kind, use isinstance(x,
Number).
But you don't want the input to be a complex number. So just exclude it with or isinstance(x, complex) (the concrete built-in type; excluding numbers.Complex would not work, since every int and float is also a numbers.Complex in the ABC hierarchy).
This way, you write the definition of square exactly the way you
want it. This solution, I think, is the best solution by the virtue of
its comprehensiveness.
To handle just the data types you want to handle.
If you have a list of valid input data types, you could also accept just those specific data types you want to handle. That is, you don't handle data types other than the ones you have specified. Examples:
if not isinstance(x, int): #just handle int
if not isinstance(x, (int, float)): #just handle int and float
if not isinstance(x, (numbers.Integral, numbers.Rational)): #just handle integral and rational, not real or complex
You may change/extend the condition above easily for different data
types that you want to include or to excluded - according to your
need. This solution, I think, is the second best by the virtue of its
customization for its validity checking.
(The code above is written in a more Pythonic way, as suggested by cat)
Not handling impossible cases: you know what the users would not put up as input.
Think it more loosely, if you know - not the data types you want to handle like in the second solution - but the data types which the user would not put, then you can have looser condition check like this:
if not isinstance(x, numbers.Number): # this is ok, because the user would not put up complex number
This solution, I think, is the third best by the virtue of being one
of the simplest yet powerful checking.
The only downside of this solution is that you don't handle the complex type. It can therefore only be used if you know for a fact that the users will not pass a complex number as input.
To handle input errors only for the known possible inputs which can cause the errors.
For example, if you know that x is always int or None - and thus the only possible input error is None - then we can simply write the logic to avoid y being evaluated only when x is None like this:
def square(x=None):
if x is None:
print ("you have not entered x")
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print (output)
This solution has the virtue of being the simplest.
...and yet the most dangerous for being used if you do not know exactly what the users would put up for the input. Otherwise, this solution is fine and is also the simplest.
Your solution, I think more or less belongs to this category. You know what input the user will give and what the user will not. Thus, using this solution or your own solution:
if not x and not str(x).isdigit():
Is fine, except that the example solution is simpler
Given your case, you can use any solution above to get:
[1, 9, 0, 81]
(Side Note: I try to format the solutions to look like "canonical solutions" for ease of reading purpose. This way, those who have the same questions and who visit this page in the future may be able to find the solutions more comprehensive and readable)
|
Why does bit-wise shift left return different results in Python and Java?
|
I'm trying to port some functionality from a Java app to Python.
In Java,
System.out.println(155 << 24);
Returns: -1694498816
In Python:
print(155 << 24)
Returns 2600468480
Many other bitwise operations have worked in the same way in both languages. Why is there a different result in these two operations?
EDIT: I'm trying to create a function in python to replicate how the left shift operator works in Java. Something along the lines of:
def lshift(val, n):
return (int(val) << n) - 0x100000000
However this doesn't seem right as (I think) it turns all numbers negatives?
EDIT2: Several hours later, I've decided it is probably not the best idea to use Python for this job and will take part of the Java application and use it as a micro service for the existing Python app.
|
Java has 32-bit fixed width integers, so 155 << 24 shifts the uppermost set bit of 155 (which is bit 7, counting bits from zero, because 155 is greater than 2**7 but less than 2**8) into the sign bit (bit 31) and you end up with a negative number.
Python has arbitrary-precision integers, so 155 << 24 is numerically equal to the positive number 155 × 2**24 (= 2600468480).
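If you do want to reproduce Java's behaviour in Python, a common trick (a sketch, not part of the original answer) is to mask to 32 bits and then reinterpret the sign bit:
def java_lshift32(val, n):
    # Emulate Java's 32-bit signed int left shift
    result = (val << n) & 0xFFFFFFFF   # keep only the low 32 bits
    if result & 0x80000000:            # bit 31 set -> negative in two's complement
        result -= 0x100000000
    return result

print(java_lshift32(155, 24))  # -1694498816, matching Java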
|
Loop while checking if element in a list in Python
|
Let's say I have a simple piece of code like this:
for i in range(1000):
if i in [150, 300, 500, 750]:
print(i)
Does the list [150, 300, 500, 750] get created every iteration of the loop? Or can I assume that the interpreter (say, CPython 2.7) is smart enough to optimize this away?
|
You can view the bytecode using dis.dis. Here's the output for CPython 2.7.11:
2 0 SETUP_LOOP 40 (to 43)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1000)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 26 (to 42)
16 STORE_FAST 0 (i)
3 19 LOAD_FAST 0 (i)
22 LOAD_CONST 6 ((150, 300, 500, 750))
25 COMPARE_OP 6 (in)
28 POP_JUMP_IF_FALSE 13
4 31 LOAD_FAST 0 (i)
34 PRINT_ITEM
35 PRINT_NEWLINE
36 JUMP_ABSOLUTE 13
39 JUMP_ABSOLUTE 13
>> 42 POP_BLOCK
>> 43 LOAD_CONST 0 (None)
46 RETURN_VALUE
Hence, the list creation is optimized to the loading of a constant tuple (byte 22). The list (which is in reality a tuple in this case) is not created anew on each iteration.
|
Removing data between double squiggly brackets with nested sub brackets in python
|
I'm having some difficulty with this problem. I need to remove all data that's contained in squiggly brackets.
Like such:
Hello {{world of the {{ crazy}} {{need {{ be}}}} sea }} there.
Becomes:
Hello there.
Here's my first try (I know it's terrible):
while 1:
firstStartBracket = text.find('{{')
if (firstStartBracket == -1):
break;
firstEndBracket = text.find('}}')
if (firstEndBracket == -1):
break;
secondStartBracket = text.find('{{',firstStartBracket+2);
lastEndBracket = firstEndBracket;
if (secondStartBracket == -1 or secondStartBracket > firstEndBracket):
text = text[:firstStartBracket] + text[lastEndBracket+2:];
continue;
innerBrackets = 2;
position = secondStartBracket;
while innerBrackets:
print innerBrackets;
#everytime we find a next start bracket before the ending add 1 to inner brackets else remove 1
nextEndBracket = text.find('}}',position+2);
nextStartBracket = text.find('{{',position+2);
if (nextStartBracket != -1 and nextStartBracket < nextEndBracket):
innerBrackets += 1;
position = nextStartBracket;
# print text[position-2:position+4];
else:
innerBrackets -= 1;
position = nextEndBracket;
# print text[position-2:position+4];
# print nextStartBracket
# print lastEndBracket
lastEndBracket = nextEndBracket;
print 'pos',position;
text = text[:firstStartBracket] + text[lastEndBracket+2:];
It seems to work but runs out of memory quite fast. Is there any better way to do this (hopefully with regex)?
EDIT: I was not clear so I'll give another example. I need to allow for multiple top level brackets.
Like such:
Hello {{world of the {{ crazy}} {{need {{ be}}}} sea }} there {{my }} friend.
Becomes:
Hello there friend.
|
You can use pyparsing module here. Solution based on this answer:
from pyparsing import nestedExpr
s = "Hello {{world of the {{ crazy}} {{need {{ be}}}} sea }} there {{my }} friend."
expr = nestedExpr('{{', '}}')
result = expr.parseString("{{" + s + "}}").asList()[0]
print(" ".join(item for item in result if not isinstance(item, list)))
Prints:
Hello there friend.
The following would only work if there is only one top-level pair of braces.
If you want to remove everything inside the double curly braces with the braces themselves:
>>> import re
>>>
>>> s = "Hello {{world of the {{ crazy}} {{need {{ be}}}} sea }} there."
>>> re.sub(r"\{\{.*\}\} ", "", s)
'Hello there.'
\{\{.*\}\} would match double curly braces followed by any characters any number of times (intentionally left it "greedy") followed by double curly braces and a space.
|
Merging Key-Value Pairings in Dictionary
|
I have a dictionary that consists of employee-manager as key-value pairs:
{'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}
I want to show the relations between employee-manager at all levels (employee's boss, his boss's boss, his boss's boss's boss etc.) using a dictionary. The desired output is:
{'a': [b,d,f], 'b': [d,f], 'c': [d,f], 'd': [f] }
Here is my attempt which only shows the first level:
for key, value in data.items():
if (value in data.keys()):
data[key] = [value]
data[key].append(data[value])
I can do another conditional statement to add the next level but this would be the wrong way to go about it. I'm not very familiar with dictionaries so what would be a better approach?
|
>>> D = {'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}
>>> res = {}
>>> for k in D:
... res[k] = [j] = [D[k]]
... while j in D:
... j = D[j]
... res[k].append(j)
...
>>> res
{'b': ['d', 'f'], 'c': ['d', 'f'], 'd': ['f'], 'a': ['b', 'd', 'f']}
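The chained assignment res[k] = [j] = [D[k]] binds both res[k] and j at once (j is unpacked from the one-element list). In case that reads as too terse, a more explicit equivalent (just a sketch):
D = {'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}
res = {}
for k in D:
    boss = D[k]
    chain = [boss]
    while boss in D:        # keep climbing while the current boss has a boss
        boss = D[boss]
        chain.append(boss)
    res[k] = chain
# res == {'a': ['b', 'd', 'f'], 'b': ['d', 'f'], 'c': ['d', 'f'], 'd': ['f']}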
|
An Object is created twice in Python
|
I have read Expert Python Programming, which has an example of multiple inheritance. The book author explained it, but I did not understand it, so I would like another view.
The example shows that B's initializer is run two times!
Could you please give me an intuitive explanation.
In [1]: class A(object):
...: def __init__(self):
...: print "A"
...: super(A, self).__init__()
In [2]: class B(object):
...: def __init__(self):
...: print "B"
...: super(B, self).__init__()
In [3]: class C(A,B):
...: def __init__(self):
...: print "C"
...: A.__init__(self)
...: B.__init__(self)
In [4]: print "MRO:", [x.__name__ for x in C.__mro__]
MRO: ['C', 'A', 'B', 'object']
In [5]: C()
C
A
B
B
Out[5]: <__main__.C at 0x3efceb8>
The book author said:
This happens due to the A.__init__(self) call, which is made with the
C instance, thus making super(A, self).__init__() call B's constructor
The point I don't get is how the A.__init__(self) call makes super(A, self).__init__() call B's constructor.
|
The super() just means "next in line", where the line is the mro ['C', 'A', 'B', 'object']. So next in line for A is B.
The mro is calculated according to an algorithm called C3 linearization.
When you use super(), Python just follows this order. When you write your class A, you don't yet know which class will be next in line. Only after you create your class C with multiple inheritance and run your program will you get the mro and "know" what comes next for A.
For your example it means:
C() calls the __init__() of C, in which it calls the __init__() of A. Now, A uses super() and finds B in the mro, hence it calls the __init__() of B. Next, the __init__() of C calls the __init__() of B again.
Calling super() in the __init__() creates a different mro and avoids the double call to the __init__() of B.
from __future__ import print_function
class A(object):
def __init__(self):
print("A")
super(A, self).__init__()
class B(object):
def __init__(self):
print("B")
super(B, self).__init__()
class C(A,B):
def __init__(self):
print("C")
super(C, self).__init__()
Use:
>>> C.mro()
[__main__.C, __main__.A, __main__.B, object]
>>> C()
C
A
B
|
Using a pre-trained word embedding (word2vec or Glove) in TensorFlow
|
I've recently reviewed an interesting implementation for convolutional text classification. However all TensorFlow code I've reviewed uses a random (not pre-trained) embedding vectors like the following:
with tf.device('/cpu:0'), tf.name_scope("embedding"):
W = tf.Variable(
tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
name="W")
self.embedded_chars = tf.nn.embedding_lookup(W, self.input_x)
self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)
Does anybody know how to use the results of Word2vec or a GloVe pre-trained word embedding instead of a random one?
|
There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called embedding, with vocab_size rows and embedding_dim columns and you want to create a tensor W that can be used in a call to tf.nn.embedding_lookup().
Simply create W as a tf.constant() that takes embedding as its value:
W = tf.constant(embedding, name="W")
This is the easiest approach, but it is not memory efficient because the value of a tf.constant() is stored multiple times in memory. Since embedding can be very large, you should only use this approach for toy examples.
Create W as a tf.Variable and initialize it from the NumPy array via a tf.placeholder():
W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]),
trainable=False, name="W")
embedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim])
embedding_init = W.assign(embedding_placeholder)
# ...
sess = tf.Session()
sess.run(embedding_init, feed_dict={embedding_placeholder: embedding})
This avoid storing a copy of embedding in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the tf.Variable). Note that I've assumed that you want to hold the embedding matrix constant during training, so W is created with trainable=False.
If the embedding was trained as part of another TensorFlow model, you can use a tf.train.Saver to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create W as in option 2, then do the following:
W = tf.Variable(...)
embedding_saver = tf.train.Saver({"name_of_variable_in_other_model": W})
# ...
sess = tf.Session()
embedding_saver.restore(sess, "checkpoint_filename.ckpt")
|
Why is there a performance difference between the order of a nested loop?
|
I have a process that loops through two lists, one being relatively large while the other being significantly smaller.
Example:
larger_list = list(range(15000))
smaller_list = list(range(2500))
for ll in larger_list:
for sl in smaller_list:
pass
I scaled down the size of the lists to test performance, and I noticed there is a decent difference between which list is looped through first.
import timeit
larger_list = list(range(150))
smaller_list = list(range(25))
def large_then_small():
for ll in larger_list:
for sl in smaller_list:
pass
def small_then_large():
for sl in smaller_list:
for ll in larger_list:
pass
print('Larger -> Smaller: {}'.format(timeit.timeit(large_then_small)))
print('Smaller -> Larger: {}'.format(timeit.timeit(small_then_large)))
>>> Larger -> Smaller: 114.884992572
>>> Smaller -> Larger: 98.7751009799
At first glance, they look identical - however there is a 16 second difference between the two functions.
Why is that?
|
When you disassemble one of your functions you get:
>>> dis.dis(small_then_large)
2 0 SETUP_LOOP 31 (to 34)
3 LOAD_GLOBAL 0 (smaller_list)
6 GET_ITER
>> 7 FOR_ITER 23 (to 33)
10 STORE_FAST 0 (sl)
3 13 SETUP_LOOP 14 (to 30)
16 LOAD_GLOBAL 1 (larger_list)
19 GET_ITER
>> 20 FOR_ITER 6 (to 29)
23 STORE_FAST 1 (ll)
4 26 JUMP_ABSOLUTE 20
>> 29 POP_BLOCK
>> 30 JUMP_ABSOLUTE 7
>> 33 POP_BLOCK
>> 34 LOAD_CONST 0 (None)
37 RETURN_VALUE
>>>
Looking at address 29 & 30, it looks like these will execute every time the inner loop ends. The two loops look basically the same, but these two instructions are executed each time the inner loop exits. Having the smaller number on the inside would cause these to be executed more often, hence increasing the time (vs the larger number on the inner loop).
|
Find tuple structure containing an unknown value inside a list
|
Say I have list of tuples:
list = [(1,5), (1,7), (2,3)]
Is there a way in Python to write something like
if (1, *) in list: do things
where * means "I don't care about this value"? So we are checking if there is a tuple with 1 at the first position and with whatever value on the second one.
As far as I know there are special mechanisms in other languages, but I just don't know the name of this particular problem. So is there similar behavior in Python?
P.S.: I know that I can use list comprehensions here. I am just interested in this particular mechanism.
|
You can use the any() function:
if any(t[0] == 1 for t in yourlist):
This efficiently tests and exits early if 1 is found in the first position of a tuple.
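If you also need the matching tuple itself rather than just a yes/no, next() with a default does much the same thing (a small sketch):
yourlist = [(1, 5), (1, 7), (2, 3)]
match = next((t for t in yourlist if t[0] == 1), None)
if match is not None:
    pass  # do things with match, e.g. (1, 5)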
|
module imports and __init__.py in Python
|
I am trying to understand what the best practices are with regards to Python's (v2.7) import mechanics. I have a project that has started to grow a bit and lets say my code is organised as follows:
foo/
__init__.py
Foo.py
module1.py
module2.py
module3.py
The package name is foo and underneath it I have module Foo.py which contains code for the class Foo. Hence I am using the same name for the package, module and class which might not be very clever to start with.
__init__.py is empty and class Foo needs to import module1, module2 and module3 hence part of my Foo.py file looks like:
# foo/Foo.py
import module1
import module2
import module3
class Foo(object):
def __init__(self):
....
....
if __name__ == '__main__':
foo_obj = Foo()
However I later revisited this and I thought it would be better to have all imports in the __init__.py file. Hence my __init__.py now looks like:
# foo/__init__.py
import Foo
import module1
import module2
import module3
....
....
and my Foo.py only needs to import foo:
# foo/Foo.py
import foo
While this looks convenient since it is an one liner, I am a bit worried that it might be creating circular imports. What I mean is that when the script Foo.py is run it will import everything it can and then __init__.py will be called which will import Foo.py again (is that correct?). Additionally using the same name for package, module and class makes things more confusing.
Does it make sense the way I have done it? Or am I asking for trouble?
|
A couple of things you could do to improve your organization, if only to adhere to some popular Python conventions and standards.
If you search this topic, you will inevitably run across people recommending the PEP8 guidelines. These are the de facto canonical standards for organizing python code.
Modules should have short, all-lowercase names. Underscores can be
used in the module name if it improves readability. Python packages
should also have short, all-lowercase names, although the use of
underscores is discouraged.
Based on these guidelines, your project modules should be named and organized thusly:
foo/
__init__.py
foo.py
module1.py
module2.py
module3.py
I find it's generally best to avoid importing modules unnecessarily in __init__.py unless you're doing it for namespace reasons. For example, if you want the namespace for your package to look like this
from foo import Foo
instead of
from foo.foo import Foo
Then it makes sense to put
from .foo import Foo
in your __init__.py. As your package gets larger, some users may not want to use all of the sub-packages and modules, so it doesn't make sense to force the user to wait for all those modules to load by implicitly importing them in your __init__.py. Also, you have to consider whether you even want module1, module2, and module3 as part of your external API. Are they only used by Foo and not intended to be for end users? If they're only used internally, then don't include them in the __init__.py
I'd also recommend using absolute or explicit relative imports for importing sub-modules. For example, in foo.py
Absolute
from foo import module1
from foo import module2
from foo import module3
Explicit Relative
from . import module1
from . import module2
from . import module3
This will prevent any possible naming issues with other packages and modules. It will also make it easier if you decide to support Python3, since the implicit relative import syntax you're currently using is not supported in Python3.
Also, files inside your package generally shouldn't contain a
if __name__ == '__main__'
This is because running a file as a script means it won't be considered part of the package that it belongs to, so it won't be able to make relative imports.
The best way to provide executable scripts to users is by using the scripts or console_scripts feature of setuptools. The way you organize your scripts can be different depending on which method you use, but I generally organize mine like this:
foo/
__init__.py
foo.py
...
scripts/
foo_script.py
setup.py
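For the console_scripts route, a minimal setup.py might look roughly like this (a sketch; the names are illustrative and it assumes foo/foo.py exposes a main() function, which is not shown in the question):
from setuptools import setup, find_packages

setup(
    name='foo',
    version='0.1',
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            'foo-script = foo.foo:main',  # installs a 'foo-script' command that calls foo.foo.main()
        ],
    },
)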
|
Problems using MySQL with AWS Lambda in Python
|
I am trying to get up and running with AWS Lambda Python (beginner in Python btw) but having some problems with including MySQL dependency. I am trying to follow the instructions here on my Mac.
For step number 3, I am getting some problems with doing the command at the root of my project
sudo pip install MySQL-python -t /
Error:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-1.5.6-py2.7.egg/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-1.5.6-py2.7.egg/pip/commands/install.py", line 311, in run
os.path.join(options.target_dir, item)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 292, in move
raise Error, "Destination path '%s' already exists" % real_dst
Error: Destination path '/MySQL_python-1.2.5-py2.7.egg-info/MySQL_python-1.2.5-py2.7.egg-info' already exists
I end up writing my following lambda function (works fine on my Mac), which is:
import MySQLdb
def lambda_handler(event, context):
# Open database connection
db = MySQLdb.connect(...)
# prepare a cursor object using cursor() method
cursor = db.cursor()
sql = "SELECT * FROM Users"
try:
# Execute the SQL command
cursor.execute(sql)
# Fetch all the rows in a list of lists.
results = cursor.fetchall()
for row in results:
fname = row[0]
lname = row[1]
age = row[2]
sex = row[3]
income = row[4]
# Now print fetched result
print ("lname=%s" %(lname))
except:
print "Error: unable to fecth data"
# disconnect from server
db.close()
What I went on to do is go to /Library/Python/2.7/site-packages and copy over the MySQLdb folders/files that were downloaded when I did sudo pip install MySQL-python (without -t /) (I'm sure I'm doing something wrong here) to my lambda project, and then zip the contents along with lambda_function.py and upload to AWS Lambda.
Then I get:
Unable to import module 'lambda_function': No module named MySQLdb
Grateful for any help and suggestions!
EDIT
I was able to make sudo pip install MySQL-python -t /pathToProject work (thanks for the help in the comments) but now I get this when running the lambda function:
Unable to import module 'lambda_function': /var/task/_mysql.so: invalid ELF header
I know that if I work on a Linux box, then it should work fine (as suggested by some people), but I am wondering if I can make it work from an OS X box.
|
For a use case like Lambda you'll be a lot happier using a pure python implementation like PyMySQL.
It's a drop in replacement for MySQLdb that follows the Python Database API specification. For most things like triggered Lambda events it will be just as fast.
I've used it in production a lot and it works great.
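A minimal sketch of what the handler could look like with PyMySQL (connection details are placeholders, not your actual values):
import pymysql

def lambda_handler(event, context):
    # Connect much like you would with MySQLdb; PyMySQL follows the same DB-API
    conn = pymysql.connect(host='your-host', user='your-user',
                           password='your-password', db='your-db')
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT * FROM Users")
            for row in cursor.fetchall():
                print(row)
    finally:
        conn.close()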
|
Python: understanding class and instance variables
|
I think I have some misconception about class and instance variables. Here is an example code:
class Animal(object):
energy = 10
skills = []
def work(self):
print 'I do something'
self.energy -= 1
def new_skill(self, skill):
self.skills.append(skill)
if __name__ == '__main__':
a1 = Animal()
a2 = Animal()
a1.work()
print a1.energy # result:9
print a2.energy # result:10
a1.new_skill('bark')
a2.new_skill('sleep')
print a1.skills # result:['bark', 'sleep']
print a2.skills # result:['bark', 'sleep']
I thought that energy and skills were class variables, because I declared them outside of any method. I modify their values inside the methods in the same way (with self in the statement, maybe incorrectly?). But the results show that energy takes different values for each object (like an instance variable), while skills seems to be shared (like a class variable). I think I've missed something important...
|
The trick here is in understanding what self.energy -= 1 does. It's really two expressions; one getting the value of self.energy - 1, and one assigning that back to self.energy.
But the thing that's confusing you is that the references are not interpreted the same way on both sides of that assignment. When Python is told to get self.energy, it tries to find that attribute on the instance, fails, and falls back to the class attribute. However, when it assigns to self.energy, it will always assign to an instance attribute, even though that hadn't previously existed.
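A compact way to see it, reusing the Animal class from the question (just a sketch):
a1 = Animal()
a2 = Animal()

a1.energy -= 1            # the read falls back to the class attribute; the assignment creates a1.energy
a1.skills.append('bark')  # mutates the single list object stored on the class

# a1.energy == 9, a2.energy == 10           (a1 now has its own instance attribute)
# a1.skills is a2.skills is Animal.skills   -> True, so both "see" ['bark']

# The usual fix: create per-instance state in __init__
class Animal2(object):
    def __init__(self):
        self.energy = 10
        self.skills = []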
|
Why do dict keys support list subtraction but not tuple subtraction?
|
Presumably dict_keys are supposed to behave as a set-like object, but they are lacking the difference method and the subtraction behaviour seems to diverge.
>>> d = {0: 'zero', 1: 'one', 2: 'two', 3: 'three'}
>>> d.keys() - [0, 2]
{1, 3}
>>> d.keys() - (0, 2)
TypeError: 'int' object is not iterable
Why does dict_keys class try to iterate an integer here? Doesn't that violate duck-typing?
>>> dict.fromkeys(['0', '1', '01']).keys() - ('01',)
{'01'}
>>> dict.fromkeys(['0', '1', '01']).keys() - ['01',]
{'1', '0'}
|
This looks to be a bug. The implementation is to convert the dict_keys to a set, then call .difference_update(arg) on it.
It looks like they misused _PyObject_CallMethodId (an optimized variant of PyObject_CallMethod), by passing a format string of just "O". Thing is, PyObject_CallMethod and friends are documented to require a Py_BuildValue format string that "should produce a tuple". With more than one format code, it wraps the values in a tuple automatically, but with only one format code, it doesn't tuple, it just creates the value (in this case, because it's already PyObject*, all it does is increment the reference count).
While I haven't tracked down where it might be doing this, I suspect somewhere in the internals it's identifying CallMethod calls that don't produce a tuple and wrapping them to make a one element tuple so the called function can actually receive the arguments in the expected format. When subtracting a tuple, it's already a tuple, and this fix up code never activates; when passing a list, it does, becoming a one element tuple containing the list.
difference_update takes varargs (as if it were declared def difference_update(self, *args)). So when it receives the unwrapped tuple, it thinks it's supposed to subtract away the elements from each entry in the tuple, not treat said entries as values to subtract away themselves. To illustrate, when you do:
mydict.keys() - (1, 2)
the bug is causing it to do (roughly):
result = set(mydict)
# We've got a tuple to pass, so all's well...
result.difference_update(*(1, 2)) # Unpack behaves like difference_update(1, 2)
# OH NO!
While:
mydict.keys() - [1, 2]
does:
result = set(mydict)
# [1, 2] isn't a tuple, so wrap
result.difference_update(*([1, 2],)) # Behaves like difference_update([1, 2])
# All's well
That's why a tuple of str works (incorrectly), - ('abc', '123') is performing a call equivalent to:
result.difference_update(*('abc', '123'))
# or without unpacking:
result.difference_update('abc', '123')
and since strs are iterables of their characters, it just blithely removes entries for 'a', 'b', 'c', etc. instead of 'abc' and '123' like you expected.
Basically, this is a bug and (when I get a chance), I'll file it against the CPython folks.
The correct behavior probably should have been to call (assuming this Id variant exists for this API):
_PyObject_CallMethodObjArgsId(result, &PyId_difference_update, other, NULL);
which wouldn't have the packing issues at all, and would run faster to boot; the smallest change would be to change the format string to "(O)" to force tuple creation even for a single item, but since the format string gains nothing, _PyObject_CallMethodObjArgsId is better.
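Until a fix lands, a simple workaround (a sketch) is to avoid handing a bare tuple to the subtraction, e.g. by wrapping it in a set or list first:
d = {0: 'zero', 1: 'one', 2: 'two', 3: 'three'}
print(d.keys() - set((0, 2)))   # {1, 3}
print(d.keys() - list((0, 2)))  # {1, 3}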
|
Python eval: is it still dangerous if I disable builtins and attribute access?
|
We all know that eval is dangerous, even if you hide dangerous functions, because you can use Python's introspection features to dig down into things and re-extract them. For example, even if you delete __builtins__, you can retrieve them with
[c for c in ().__class__.__base__.__subclasses__()
if c.__name__ == 'catch_warnings'][0]()._module.__builtins__
However, every example I've seen of this uses attribute access. What if I disable all builtins, and disable attribute access (by tokenizing the input with a Python tokenizer and rejecting it if it has an attribute access token)?
And before you ask, no, for my use-case, I do not need either of these, so it isn't too crippling.
What I'm trying to do is make SymPy's sympify function more safe. Currently it tokenizes the input, does some transformations on it, and evals it in a namespace. But it's unsafe because it allows attribute access (even though it really doesn't need it).
|
I'm going to mention one of the new features of Python 3.6 - f-strings.
They can evaluate expressions,
>>> eval('f"{().__class__.__base__}"', {'__builtins__': None}, {})
"<class 'object'>"
but the attribute access won't be detected by Python's tokenizer:
0,0-0,0: ENCODING 'utf-8'
1,0-1,1: ERRORTOKEN "'"
1,1-1,27: STRING 'f"{().__class__.__base__}"'
2,0-2,0: ENDMARKER ''
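For reference, here is a sketch of how to reproduce that token dump yourself; on Python 3.6-3.11 the whole f-string arrives as a single STRING token, so a filter that rejects attribute-access tokens never sees the dots (3.12 changed f-string tokenization):
import io
import tokenize
source = 'f"{().__class__.__base__}"'
for tok in tokenize.tokenize(io.BytesIO(source.encode('utf-8')).readline):
    print(tok)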
|
Multi POST query (session mode)
|
I am trying to interrogate this site to get the list of offers.
The problem is that we need to fill 2 forms (2 POST queries) before receiving the final result.
This what I have done so far:
First I am sending the first POST after setting the cookies:
library(httr)
set_cookies(.cookies = c(a = "1", b = "2"))
first_url <- "https://compare.switchon.vic.gov.au/submit"
body <- list(energy_category="electricity",
location="home",
"location-home"="shift",
"retailer-company"="",
postcode="3000",
distributor=7,
zone=1,
energy_concession=0,
"file-provider"="",
solar=0,
solar_feedin_tariff="",
disclaimer_chkbox="disclaimer_selected")
qr<- POST(first_url,
encode="form",
body=body)
Then trying to retrieve the offers using the second post query:
gov_url <- "https://compare.switchon.vic.gov.au/energy_questionnaire/submit"
qr1<- POST(gov_url,
encode="form",
body=list(`person-count`=1,
`room-count`=1,
`refrigerator-count`=1,
`gas-type`=4,
`pool-heating`=0,
spaceheating="none",
spacecooling="none",
`cloth-dryer`=0,
waterheating="other"),
set_cookies(a = 1, b = 2))
library(XML)
dc <- htmlParse(qr1)
But unfortunately I get a message indicating the end of session. Many thanks for any help to resolve this.
update add cookies:
I added the cookies and the intermediate GET, but I still don't have any of the results.
library(httr)
first_url <- "https://compare.switchon.vic.gov.au/submit"
body <- list(energy_category="electricity",
location="home",
"location-home"="shift",
"retailer-company"="",
postcode=3000,
distributor=7,
zone=1,
energy_concession=0,
"file-provider"="",
solar=0,
solar_feedin_tariff="",
disclaimer_chkbox="disclaimer_selected")
qr<- POST(first_url,
encode="form",
body=body,
config=set_cookies(a = 1, b = 2))
xx <- GET("https://compare.switchon.vic.gov.au/energy_questionnaire",config=set_cookies(a = 1, b = 2))
gov_url <- "https://compare.switchon.vic.gov.au/energy_questionnaire/submit"
qr1<- POST(gov_url,
encode="form",
body=list(
`person-count`=1,
`room-count`=1,
`refrigerator-count`=1,
`gas-type`=4,
`pool-heating`=0,
spaceheating="none",
spacecooling="none",
`cloth-dryer`=0,
waterheating="other"),
config=set_cookies(a = 1, b = 2))
library(XML)
dc <- htmlParse(qr1)
|
Using a python requests.Session object with the following data gets to the results page:
form1 = {"energy_category": "electricity",
"location": "home",
"location-home": "shift",
"distributor": "7",
"postcode": "3000",
"energy_concession": "0",
"solar": "0",
"disclaimer_chkbox": "disclaimer_selected",
}
form2 = {"person-count":"1",
"room-count":"4",
"refrigerator-count":"0",
"gas-type":"3",
"pool-heating":"0",
"spaceheating[]":"none",
"spacecooling[]":"none",
"cloth-dryer":"0",
"waterheating[]":"other"}
sub_url = "https://compare.switchon.vic.gov.au/submit"
with requests.Session() as s:
s.post(sub_url, data=form1)
r = (s.get("https://compare.switchon.vic.gov.au/energy_questionnaire"))
s.post("https://compare.switchon.vic.gov.au/energy_questionnaire/submit",
data=form2)
r = s.get("https://compare.switchon.vic.gov.au/offers")
print(r.content)
You should see the matching h1 in the returned html that you see on the page:
<h1>Your electricity offers</h1>
Or using scrapy form requests:
import scrapy
class Spider(scrapy.Spider):
name = 'comp'
start_urls = ['https://compare.switchon.vic.gov.au/energy_questionnaire/submit']
form1 = {"energy_category": "electricity",
"location": "home",
"location-home": "shift",
"distributor": "7",
"postcode": "3000",
"energy_concession": "0",
"solar": "0",
"disclaimer_chkbox": "disclaimer_selected",
}
sub_url = "https://compare.switchon.vic.gov.au/submit"
form2 = {"person-count":"1",
"room-count":"4",
"refrigerator-count":"0",
"gas-type":"3",
"pool-heating":"0",
"spaceheating[]":"none",
"spacecooling[]":"none",
"cloth-dryer":"0",
"waterheating[]":"other"}
def start_requests(self):
return [scrapy.FormRequest(
self.sub_url,
formdata=self.form1,
callback=self.parse
)]
def parse(self, response):
return scrapy.FormRequest.from_response(
response,
formdata=self.form2,
callback=self.after
)
def after(self, response):
print("<h1>Your electricity offers</h1>" in response.body)
Which we can verify has the "<h1>Your electricity offers</h1>":
2016-03-07 12:27:31 [scrapy] DEBUG: Crawled (200) <GET https://compare.switchon.vic.gov.au/offers#list/electricity> (referer: https://compare.switchon.vic.gov.au/energy_questionnaire)
True
2016-03-07 12:27:31 [scrapy] INFO: Closing spider (finished)
The next problem is that the actual data is dynamically rendered, which you can verify if you look at the source of the results page; you can actually get all the provider data in json format:
with requests.Session() as s:
s.post(sub_url, data=form1)
r = (s.get("https://compare.switchon.vic.gov.au/energy_questionnaire"))
s.post("https://compare.switchon.vic.gov.au/energy_questionnaire/submit",
data=form2)
r = s.get("https://compare.switchon.vic.gov.au/service/offers")
print(r.json())
A snippet of which is:
{u'pageMetaData': {u'showDual': False, u'isGas': False, u'showTouToggle': True, u'isElectricityInitial': True, u'showLoopback': False, u'isElectricity': True}, u'offersList': [{u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.peopleenergy.com.au', u'offerId': u'PEO33707SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1410, u'offerType': u'Standing offer', u'offerName': u'Residential 5-Day Time of Use', u'conditionalPrice': 1410, u'fullDiscountedPrice': 1390, u'greenPower': 0, u'retailerName': u'People Energy', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'1300 788 970', u'isPartDual': False, u'retailerId': u'7322', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'1645', u'exitFeeCount': 0, u'timeDefinition': u'Local time', u'retailerImageUrl': u'img/retailers/big/peopleenergy.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.peopleenergy.com.au', u'offerId': u'PEO33773SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1500, u'offerType': u'Standing offer', u'offerName': u'Residential Peak Anytime', u'conditionalPrice': 1500, u'fullDiscountedPrice': 1480, u'greenPower': 0, u'retailerName': u'People Energy', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Single rate', u'retailerPhone': u'1300 788 970', u'isPartDual': False, u'retailerId': u'7322', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'1649', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/peopleenergy.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.energythatcould.com.au', u'offerId': u'PAC33683SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1400, u'offerType': u'Standing offer', u'offerName': u'Vic Home Flex', u'conditionalPrice': 1400, u'fullDiscountedPrice': 1400, u'greenPower': 0, u'retailerName': u'Pacific Hydro Retail Pty Ltd', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Flexible Pricing', u'retailerPhone': u'1800 010 648', u'isPartDual': False, u'retailerId': u'15902', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'1666', u'exitFeeCount': 0, u'timeDefinition': u'Local time', u'retailerImageUrl': u'img/retailers/big/pachydro.jpg'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.energythatcould.com.au', u'offerId': u'PAC33679SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1340, u'offerType': u'Standing offer', u'offerName': u'Vic Home Flex', u'conditionalPrice': 1340, u'fullDiscountedPrice': 1340, 
u'greenPower': 0, u'retailerName': u'Pacific Hydro Retail Pty Ltd', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Single rate', u'retailerPhone': u'1800 010 648', u'isPartDual': False, u'retailerId': u'15902', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'1680', u'exitFeeCount': 0, u'timeDefinition': u'Local time', u'retailerImageUrl': u'img/retailers/big/pachydro.jpg'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 10, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30367MR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': True, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1370, u'offerType': u'Market offer', u'offerName': u'Citipower Commander Residential Market Offer (CE3CPR-MAT1 + PF1/TF1/GF1)', u'conditionalPrice': 1370, u'fullDiscountedPrice': 1160, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': True, u'greenpowerChargeType': None, u'tariffType': u'Single rate', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2384', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 10, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30359MR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': True, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1330, u'offerType': u'Market offer', u'offerName': u'Citipower Commander Residential Market Offer (Flexible Pricing (Peak, Shoulder and Off Peak) (CE3CPR-MCFP1 + PF1/TF1/GF1)', u'conditionalPrice': 1330, u'fullDiscountedPrice': 1140, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': True, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2386', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 10, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E33241MR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': True, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1300, u'offerType': u'Market offer', u'offerName': u'Citipower Commander Residential Market Offer (Peak / Off Peak) (CE3CPR-MPK1OP1)', u'conditionalPrice': 1300, u'fullDiscountedPrice': 1100, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': True, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': 
u'2389', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30379SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1370, u'offerType': u'Standing offer', u'offerName': u'Citipower Commander Residential Standing Offer (CE3CPR-SAT1 + PF1/TF1/GF1)', u'conditionalPrice': 1370, u'fullDiscountedPrice': 1370, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Single rate', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2391', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30369SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1330, u'offerType': u'Standing offer', u'offerName': u'Citipower Commander Residential Standing Offer (Flexible Pricing (Peak, Shoulder and Off Peak) (CE3CPR-SCFP1 + PF1/TF1/GF1)', u'conditionalPrice': 1330, u'fullDiscountedPrice': 1330, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2393', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30375SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1300, u'offerType': u'Standing offer', u'offerName': u'Citipower Commander Residential Standing Offer (Peak / Off Peak) (CE3CPR-SPK1OP1)', u'conditionalPrice': 1300, u'fullDiscountedPrice': 1300, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2395', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.dodo.com/powerandgas', u'offerId': u'DOD32903SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, 
u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1320, u'offerType': u'Standing offer', u'offerName': u'Citipower Res No Term Standing Offer (Common Form Flex Plan) (E3CPR-SCFP1)', u'conditionalPrice': 1320, u'fullDiscountedPrice': 1320, u'greenPower': 0, u'retailerName': u'Dodo Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType':
Then if you look at the requests later, for example when you click the compare selected button on the results page, there is a request like:
https://compare.switchon.vic.gov.au/service/offer/tariff/9090/9092
So you may be able to mimic what happens by filtering using the tariff or some variation.
You can actually get all the data as json, if you enter the same values as below into the forms:
form1 = {"energy_category": "electricity",
"location": "home",
"location-home": "shift",
"distributor": "7",
"postcode": "3000",
"energy_concession": "0",
"solar": "0",
"disclaimer_chkbox": "disclaimer_selected"
}
form2 = {"person-count":"1",
"room-count":"1",
"refrigerator-count":"1",
"gas-type":"4",
"pool-heating":"0",
"spaceheating[]":"none",
"spacecooling[]":"none",
"cloth-dryer":"0",
"cloth-dryer-freq-weekday":"",
"waterheating[]":"other"}
import json
with requests.Session() as s:
s.post(sub_url, data=form1)
r = (s.get("https://compare.switchon.vic.gov.au/energy_questionnaire"))
s.post("https://compare.switchon.vic.gov.au/energy_questionnaire/submit",
data=form2)
js = s.get("https://compare.switchon.vic.gov.au/service/offers").json()["offersList"]
by_discount = sorted(js, key=lambda d: d["offerDetails"][0]["fullDiscountedPrice"])
If we just pull the first two values from the list ordered by the total discount price:
from pprint import pprint as pp
pp(by_discount[:2])
You will get:
[{u'isChecked': False,
u'isClosed': False,
u'offerDetails': [{u'basePrice': 980,
u'conditionalPrice': 980,
u'contractLength': [u'None'],
u'contractLengthCount': 1,
u'coolingOffPeriod': 10,
u'estimatedSolarCredit': 0,
u'exitFee': [0],
u'exitFeeCount': 1,
u'fullDiscountedPrice': 660,
u'greenPower': 0,
u'greenpowerAmount': 0,
u'greenpowerChargeType': None,
u'hasIncentive': False,
u'hasPayOnTimeDiscount': True,
u'intrinsicGreenpowerPercentage': u'0.0000',
u'isDirectDebitOnly': False,
u'isPartDual': False,
u'isTouOffer': False,
u'offerId': u'GLO40961MR',
u'offerKey': u'7636',
u'offerName': u'GLO SWITCH',
u'offerType': u'Market offer',
u'retailerId': u'31206',
u'retailerImageUrl': u'img/retailers/big/globird.jpg',
u'retailerName': u'GloBird Energy',
u'retailerPhone': u'(03) 8560 4199',
u'retailerUrl': u'http://www.globirdenergy.com.au/switchon/',
u'solarType': None,
u'tariffDetails': {},
u'tariffType': u'Single rate',
u'timeDefinition': u'Local time'}],
u'offerFuelType': 0},
{u'isChecked': False,
u'isClosed': False,
u'offerDetails': [{u'basePrice': 1080,
u'conditionalPrice': 1080,
u'contractLength': [u'None'],
u'contractLengthCount': 1,
u'coolingOffPeriod': 10,
u'estimatedSolarCredit': 0,
u'exitFee': [0],
u'exitFeeCount': 1,
u'fullDiscountedPrice': 720,
u'greenPower': 0,
u'greenpowerAmount': 0,
u'greenpowerChargeType': None,
u'hasIncentive': False,
u'hasPayOnTimeDiscount': True,
u'intrinsicGreenpowerPercentage': u'0.0000',
u'isDirectDebitOnly': False,
u'isPartDual': False,
u'isTouOffer': True,
u'offerId': u'GLO41009MR',
u'offerKey': u'7642',
u'offerName': u'GLO SWITCH',
u'offerType': u'Market offer',
u'retailerId': u'31206',
u'retailerImageUrl': u'img/retailers/big/globird.jpg',
u'retailerName': u'GloBird Energy',
u'retailerPhone': u'(03) 8560 4199',
u'retailerUrl': u'http://www.globirdenergy.com.au/switchon/',
u'solarType': None,
u'tariffDetails': {},
u'tariffType': u'Time of use',
u'timeDefinition': u'Local time'}],
u'offerFuelType': 0}]
Which should match what you see on the page when you click the "DISCOUNTED PRICE" filter button.
For the normal view it seems to be ordered by conditionalPrice or basePrice; again, pulling just the first two values should match what you see on the webpage:
base = sorted(js, key=lambda d: d["offerDetails"][0]["conditionalPrice"])
from pprint import pprint as pp
pp(base[:2])
[{u'isChecked': False,
u'isClosed': False,
u'offerDetails': [{u'basePrice': 740,
u'conditionalPrice': 740,
u'contractLength': [u'None'],
u'contractLengthCount': 1,
u'coolingOffPeriod': 0,
u'estimatedSolarCredit': 0,
u'exitFee': [0],
u'exitFeeCount': 0,
u'fullDiscountedPrice': 740,
u'greenPower': 0,
u'greenpowerAmount': 0,
u'greenpowerChargeType': None,
u'hasIncentive': False,
u'hasPayOnTimeDiscount': False,
u'intrinsicGreenpowerPercentage': u'0.0000',
u'isDirectDebitOnly': False,
u'isPartDual': False,
u'isTouOffer': False,
u'offerId': u'NEX42694SR',
u'offerKey': u'9092',
u'offerName': u'Citpower Single Rate Residential',
u'offerType': u'Standing offer',
u'retailerId': u'35726',
u'retailerImageUrl': u'img/retailers/big/nextbusinessenergy.jpg',
u'retailerName': u'Next Business Energy Pty Ltd',
u'retailerPhone': u'1300 466 398',
u'retailerUrl': u'http://www.nextbusinessenergy.com.au/',
u'solarType': None,
u'tariffDetails': {},
u'tariffType': u'Single rate',
u'timeDefinition': u'Local time'}],
u'offerFuelType': 0},
{u'isChecked': False,
u'isClosed': False,
u'offerDetails': [{u'basePrice': 780,
u'conditionalPrice': 780,
u'contractLength': [u'None'],
u'contractLengthCount': 1,
u'coolingOffPeriod': 0,
u'estimatedSolarCredit': 0,
u'exitFee': [0],
u'exitFeeCount': 0,
u'fullDiscountedPrice': 780,
u'greenPower': 0,
u'greenpowerAmount': 0,
u'greenpowerChargeType': None,
u'hasIncentive': False,
u'hasPayOnTimeDiscount': False,
u'intrinsicGreenpowerPercentage': u'0.0000',
u'isDirectDebitOnly': False,
u'isPartDual': False,
u'isTouOffer': False,
u'offerId': u'NEX42699SR',
u'offerKey': u'9090',
u'offerName': u'Citpower Residential Flexible Pricing',
u'offerType': u'Standing offer',
u'retailerId': u'35726',
u'retailerImageUrl': u'img/retailers/big/nextbusinessenergy.jpg',
u'retailerName': u'Next Business Energy Pty Ltd',
u'retailerPhone': u'1300 466 398',
u'retailerUrl': u'http://www.nextbusinessenergy.com.au/',
u'solarType': None,
u'tariffDetails': {},
u'tariffType': u'Flexible Pricing',
u'timeDefinition': u'Local time'}],
u'offerFuelType': 0}]
You can see all the json returned in the firebug console if you click the https://compare.switchon.vic.gov.au/service/offers GET entry and then hit the response tab.
You should be able to pull each field that you want from that.
The output actually has a few extra results which you don't see on the page unless you toggle the TOU button on the results page.
You can filter those from the results so you exactly match the default output or give an option to include with a helper function:
def order_by(l, k, is_tou=False):
if not is_tou:
filt = filter(lambda x: not x["offerDetails"][0]["isTouOffer"], l)
return sorted(filt, key=lambda d: d["offerDetails"][0][k])
return sorted(l, key=lambda d: d["offerDetails"][0][k])
import json
with requests.Session() as s:
s.post(sub_url, data=form1)
r = (s.get("https://compare.switchon.vic.gov.au/energy_questionnaire"))
s.post("https://compare.switchon.vic.gov.au/energy_questionnaire/submit",
data=form2)
js = s.get("https://compare.switchon.vic.gov.au/service/offers").json()["offersList"]
by_price = order_by(js, "conditionalPrice", False)
print(by_price[:3])
If you check the output you will see Origin Energy third with a price of 840 in the results when the switch is on, or 860 for AGL when it is off; you can apply the same to the discount output.
The regular output also seems to be ordered by conditionalPrice; if you check the source, the two js functions that get called for ordering are:
ng-click="changeSortingField('conditionalPrice')"
ng-click="changeSortingField('fullDiscountedPrice')"
So that should now definitely completely match the site output.
|
sine calculation orders of magnitude slower than cosine
|
tl;dr
Of the same numpy array, calculating np.cos takes 3.2 seconds, whereas np.sin runs 548 seconds (nine minutes) on Linux Mint.
See this repo for full code.
I've got a pulse signal (see image below) which I need to modulate onto an HF carrier, simulating a Laser Doppler Vibrometer. Therefore the signal and its time basis need to be resampled to match the carrier's higher sampling rate.
In the following demodulation process both the in-phase carrier cos(omega * t) and the phase-shifted carrier sin(omega * t) are needed.
Oddly, the time to evaluate these functions depends highly on the way the time vector has been calculated.
The time vector t1 is being calculated using np.linspace directly, t2 uses the method implemented in scipy.signal.resample.
pulse = np.load('data/pulse.npy') # 768 samples
pulse_samples = len(pulse)
pulse_samplerate = 960 # 960 Hz
pulse_duration = pulse_samples / pulse_samplerate # here: 0.8 s
pulse_time = np.linspace(0, pulse_duration, pulse_samples,
endpoint=False)
carrier_freq = 40e6 # 40 MHz
carrier_samplerate = 100e6 # 100 MHz
carrier_samples = pulse_duration * carrier_samplerate # 80 million
t1 = np.linspace(0, pulse_duration, carrier_samples)
# method used in scipy.signal.resample
# https://github.com/scipy/scipy/blob/v0.17.0/scipy/signal/signaltools.py#L1754
t2 = np.arange(0, carrier_samples) * (pulse_time[1] - pulse_time[0]) \
* pulse_samples / float(carrier_samples) + pulse_time[0]
As can be seen in the picture below, the time vectors are not identical. At 80 million samples the difference t1 - t2 reaches 1e-8.
Calculating the in-phase and shifted carrier of t1 takes 3.2 seconds each on my machine.
With t2, however, calculating the shifted carrier takes 540 seconds. Nine minutes. For nearly the same 80 million values.
omega_t1 = 2 * np.pi * carrier_freq * t1
np.cos(omega_t1) # 3.2 seconds
np.sin(omega_t1) # 3.3 seconds
omega_t2 = 2 * np.pi * carrier_freq * t2
np.cos(omega_t2) # 3.2 seconds
np.sin(omega_t2) # 9 minutes
I can reproduce this bug on both my 32-bit laptop and my 64-bit tower, both running Linux Mint 17. On my flat mate's MacBook, however, the "slow sine" takes as little time as the other three calculations.
I run Linux Mint 17.03 on a 64-bit AMD processor and Linux Mint 17.2 on a 32-bit Intel processor.
|
I don't think numpy has anything to do with this: I think you're tripping across a performance bug in the C math library on your system, one which affects sin near large multiples of pi. (I'm using "bug" in a pretty broad sense here -- for all I know, since the sine of large floats is poorly defined, the "bug" is actually the library behaving correctly to handle corner cases!)
On linux, I get:
>>> %timeit -n 10000 math.sin(6e7*math.pi)
10000 loops, best of 3: 191 µs per loop
>>> %timeit -n 10000 math.sin(6e7*math.pi+0.12)
10000 loops, best of 3: 428 ns per loop
and other Linux-using types from the Python chatroom report
10000 loops, best of 3: 49.4 µs per loop
10000 loops, best of 3: 206 ns per loop
and
In [3]: %timeit -n 10000 math.sin(6e7*math.pi)
10000 loops, best of 3: 116 µs per loop
In [4]: %timeit -n 10000 math.sin(6e7*math.pi+0.12)
10000 loops, best of 3: 428 ns per loop
but a Mac user reported
In [3]: timeit -n 10000 math.sin(6e7*math.pi)
10000 loops, best of 3: 300 ns per loop
In [4]: %timeit -n 10000 math.sin(6e7*math.pi+0.12)
10000 loops, best of 3: 361 ns per loop
for no order-of-magnitude difference. As a workaround, you might try taking things mod 2 pi first:
>>> new = np.sin(omega_t2[-1000:] % (2*np.pi))
>>> old = np.sin(omega_t2[-1000:])
>>> abs(new - old).max()
7.83773902468434e-09
which has better performance:
>>> %timeit -n 1000 new = np.sin(omega_t2[-1000:] % (2*np.pi))
1000 loops, best of 3: 63.8 µs per loop
>>> %timeit -n 1000 old = np.sin(omega_t2[-1000:])
1000 loops, best of 3: 6.82 ms per loop
Note that as expected, a similar effect happens for cos, just shifted:
>>> %timeit -n 1000 np.cos(6e7*np.pi + np.pi/2)
1000 loops, best of 3: 37.6 µs per loop
>>> %timeit -n 1000 np.cos(6e7*np.pi + np.pi/2 + 0.12)
1000 loops, best of 3: 2.46 µs per loop
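Applied to the arrays from the question, the workaround is just a phase reduction before the trig calls (a sketch; omega_t2 as computed in the question):
import numpy as np
omega_t2_reduced = np.mod(omega_t2, 2 * np.pi)  # reduce the phase once
in_phase = np.cos(omega_t2_reduced)
shifted = np.sin(omega_t2_reduced)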
|
`object in list` behaves different from `object in dict`?
|
I've got an iterator with some objects in it and I wanted to create a collection of uniqueUsers in which I only list every user once. So playing around a bit I tried it with both a list and a dict:
>>> for m in ms: print m.to_user # let's first look what's inside ms
...
Pete Kramer
Pete Kramer
Pete Kramer
>>>
>>> uniqueUsers = [] # Create an empty list
>>> for m in ms:
... if m.to_user not in uniqueUsers:
... uniqueUsers.append(m.to_user)
...
>>> uniqueUsers
[Pete Kramer] # This is what I would expect
>>>
>>> uniqueUsers = {} # Now let's create a dict
>>> for m in ms:
... if m.to_user not in uniqueUsers:
... uniqueUsers[m.to_user] = 1
...
>>> uniqueUsers
{Pete Kramer: 1, Pete Kramer: 1, Pete Kramer: 1}
So I tested it by converting the dict to a list when doing the if statement, and that works as I would expect it to:
>>> uniqueUsers = {}
>>> for m in ms:
... if m.to_user not in list(uniqueUsers):
... uniqueUsers[m.to_user] = 1
...
>>> uniqueUsers
{Pete Kramer: 1}
and I can get a similar result by testing against uniqueUsers.keys().
The thing is that I don't understand why this difference occurs. I always thought that if you do if object in dict, it simply creates a list of the dict's keys and tests against that, but that's obviously not the case.
Can anybody explain how object in dict internally works and why it doesn't behave similar to object in list (as I would expect it to)?
|
In order to understand what's going on, you have to understand how the in operator, the membership test, behaves for the different types.
For lists, this is pretty simple due to what lists fundamentally are: ordered arrays that do not care about duplicates. The only possible way to perform a membership test here is to iterate over the list and check every item for equality. Something like this:
# x in lst
for item in lst:
if x == item:
return True
return False
Dictionaries are a bit different: They are hash tables where keys are meant to be unique. Hash tables require the keys to be hashable, which essentially means that there needs to be an explicit function that converts the object into an integer. This hash value is then used to put the key/value mapping somewhere into the hash table.
Since the hash value determines where in the hash table an item is placed, it's critical that objects which are meant to be identical produce the same hash value. So the following implication has to be true: x == y => hash(x) == hash(y). The reverse does not need to be true though; it's perfectly valid to have different objects produce the same hash value.
When a membership test on a dictionary is performed, the dictionary will first look for the hash value. If it can find it, then it will perform an equality check on all items it found; if it didn't find the hash value, then it assumes that it's a different object:
# x in dct
h = hash(x)
items = getItemsForHash(dct, h)
for item in items:
if x == item:
return True
# items is empty, or no match inside the loop
return False
Since you get the desired result when using a membership test against a list, that means that your object implements the equality comparison (__eq__) correctly. But since you do not get the correct result when using a dictionary, there seems to be a __hash__ implementation that is out of sync with the equality comparison implementation:
>>> class SomeType:
def __init__ (self, x):
self.x = x
def __eq__ (self, other):
return self.x == other.x
def __hash__ (self):
# bad hash implementation
return hash(id(self))
>>> l = [SomeType(1)]
>>> d = { SomeType(1): 'x' }
>>> x = SomeType(1)
>>> x in l
True
>>> x in d
False
Note that for new-style classes in Python 2 (classes that inherit from object), this "bad hash implementation" (which is based on the object id) is the default. So when you do not implement your own __hash__ function, it still uses that one. This ultimately means that unless your __eq__ only performs an identity check (the default), the hash function will be out of sync.
So the solution is to implement __hash__ in a way that it aligns with the rules used in __eq__. For example, if you compare two members self.x and self.y, then you should use a compound hash over those two members. The easiest way to do that is to return the hash value of a tuple of those values:
class SomeType (object):
def __init__ (self, x, y):
self.x = x
self.y = y
def __eq__ (self, other):
return self.x == other.x and self.y == other.y
def __hash__ (self):
return hash((self.x, self.y))
Note that you should not make an object hashable if it is mutable:
If a class defines mutable objects and implements an __eq__() method, it should not implement __hash__(), since the implementation of hashable collections requires that a key's hash value is immutable (if the object's hash value changes, it will be in the wrong hash bucket).
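A quick sanity check with the corrected class above (sketch):
l = [SomeType(1, 2)]
d = {SomeType(1, 2): 'x'}
x = SomeType(1, 2)
print(x in l)  # True
print(x in d)  # True, now that __hash__ agrees with __eq__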
|
Python 3: super() raises TypeError unexpectedly
|
Coming from Java, I'm struggling a bit getting down inheritance, abstract classes, static methods and similar concepts of OO programming in Python.
I have an implementation of an expression tree class, given (simplified) by
# Generic node class
class Node(ABC):
@abstractmethod
def to_expr(self):
pass
@staticmethod
def bracket_complex(child):
s = child.to_expr()
return s if isinstance(child, Leaf) or isinstance(child, UnaryOpNode) else "(" + s + ")"
# Leaf class - used for values and variables
class Leaf(Node):
def __init__(self, val):
self.val = val
def to_expr(self):
return str(self.val)
# Unary operator node
class UnaryOpNode(Node):
def __init__(self, op, child):
self.op = op
self.child = child
def to_expr(self):
return str(self.op) + super().bracket_complex(self.child)
# Binary operator node
class BinaryOpNode(Node):
def __init__(self, op, lchild, rchild):
self.op = op
self.lchild = lchild
self.rchild = rchild
def to_expr(self):
return super().bracket_complex(self.lchild) + " " + str(self.op) + " " + super().bracket_complex(self.rchild)
# Variadic operator node (arbitrary number of arguments)
# Assumes commutative operator
class VariadicOpNode(Node):
def __init__(self, op, list_):
self.op = op
self.children = list_
def to_expr(self):
return (" " + str(self.op) + " ").join(super().bracket_complex(child) for child in self.children)
The method to_expr() works fine when called on instances of Leaf, UnaryOpNode and BinaryOpNode, but raises a TypeError when called on an instance of VariadicOpNode:
TypeError: super(type, obj): obj must be an instance or subtype of type
What am I doing wrong in that specific class that super() is suddenly not working?
In Java the static method would get inherited so I wouldn't even need the super call, but in Python this does not seem to be the case.
|
You're using super() without arguments in a generator expression. Argument-free super() is magic - it relies on information in the caller frame. Since the generator expression creates an additional function, super() without arguments does not work there. However, since your superclass is unlikely to change in the middle of execution of a method, you can move it out of the generator expression - this should also speed things up:
def to_expr(self):
bracket_complex = super().bracket_complex
return (" " + str(self.op) + " ").join(bracket_complex(child) for child in self.children)
However as static methods are "inherited" in Python, you could call the super method via self provided that you didn't override it in a subclass. Thus in this simple case you can write:
def to_expr(self):
return (" " + str(self.op) + " ").join(self.bracket_complex(child) for child in self.children)
The implementation detail is that if no arguments are provided, the first argument shall be the value that is in the __class__ cell of the caller frame, and the second shall be the first argument given to the caller function. Usually you just get a SystemError when using super in the wrong place, but generator expressions are wrapped inside an implicit generator function which creates another call frame. Unfortunately this function gets an argument, which leads super() to complain with this exception.
So normally super() would be passed the enclosing class there as the first argument, but within the generator expression a generator object was passed instead - and thus it is obvious that a TypeError needs to be raised.
|
Is there a one line code to find maximal value in a matrix?
|
To find the maximal value in a matrix of numbers, we can code 5 lines to solve the problem:
ans = matrix[0][0]
for x in range(len(matrix)):
for y in range(len(matrix[0])):
ans = max(ans, matrix[x][y])
return ans
Is there a one line solution for this problem?
The one that I came up with is pretty awkward actually:
return max(max(matrix, key=max))
or
return max(map(max, matrix))
|
You can use a generator expression to find the maximum in your matrix. That way you can avoid building the full list of matrix elements in memory.
maximum = max(max(row) for row in matrix)
instead of a list comprehension, as given in a previous answer here
maximum = max([max(row) for row in matrix])
This is from the PEP for generator expressions (the rationale section):
...many of the use cases do not need to have a full list created in
memory. Instead, they only need to iterate over the elements one at a
time.
...
Generator expressions are especially useful with functions like sum(), min(), and max() that reduce an iterable input to a single value
...
The utility of generator expressions is greatly enhanced when combined with reduction functions like sum(), min(), and max().
Also, take a look at this SO post: Generator Expressions vs. List Comprehension.
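If you would rather avoid the nested max calls entirely, flattening the matrix first also works (a sketch using itertools):
from itertools import chain
matrix = [[1, 5, 3], [9, 2, 8], [4, 7, 6]]
maximum = max(chain.from_iterable(matrix))
print(maximum)  # 9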
|
Efficiently count word frequencies in python
|
I'd like to count frequencies of all words in a text file.
>>> countInFile('test.txt')
should return {'aaa':1, 'bbb': 2, 'ccc':1} if the target text file is like:
# test.txt
aaa bbb ccc
bbb
I've implemented it with pure python following some posts. However, I've found that pure-python approaches are too slow for the huge file size (> 1GB).
I think borrowing sklearn's power is a candidate.
If you let CountVectorizer count frequencies for each line, I guess you will get word frequencies by summing up each column. But that sounds like a rather indirect way.
What is the most efficient and straightforward way to count words in a file with python?
Update
My (very slow) code is here:
import string
from collections import Counter
def get_term_frequency_in_file(source_file_path):
wordcount = {}
with open(source_file_path) as f:
for line in f:
line = line.lower().translate(None, string.punctuation)
this_wordcount = Counter(line.split())
wordcount = add_merge_two_dict(wordcount, this_wordcount)
return wordcount
def add_merge_two_dict(x, y):
return { k: x.get(k, 0) + y.get(k, 0) for k in set(x) | set(y) }
|
The most succinct approach is to use the tools Python gives you.
from future_builtins import map # Only on Python 2
from collections import Counter
from itertools import chain
def countInFile(filename):
with open(filename) as f:
return Counter(chain.from_iterable(map(str.split, f)))
That's it. map(str.split, f) is making a generator that returns lists of words from each line. Wrapping in chain.from_iterable converts that to a single generator that produces a word at a time. Counter takes an input iterable and counts all unique values in it. At the end, you return a dict-like object (a Counter) that stores all unique words and their counts, and during creation, you only store a line of data at a time and the total counts, not the whole file at once.
In theory, you might do slightly better looping over the chained results yourself and using a dict or collections.defaultdict(int) to count (because Counter is implemented in Python, which can make it slower in some cases), but letting Counter do the work is simpler and more self-documenting (I mean, the whole goal is counting, so use a Counter).
Update: You seem to want punctuation stripped and case-insensitivity, so here's a variant of my earlier code that does that:
from string import punctuation
def countInFile(filename):
with open(filename) as f:
linewords = (line.translate(None, punctuation).lower().split() for line in f)
return Counter(chain.from_iterable(linewords))
Your code runs much more slowly because it's creating and destroying many small Counter and set objects, rather than .update-ing a single Counter once per line (which, while slightly slower than what I gave in the updated code block, would be at least algorithmically similar in scaling factor).
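For example, running it against the small test.txt from the question would look like this (sketch):
counts = countInFile('test.txt')
print(counts)                 # e.g. Counter({'bbb': 2, 'aaa': 1, 'ccc': 1})
print(counts.most_common(1))  # [('bbb', 2)]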
|
Create empty conda environment
|
I can create a new conda environment, with program biopython with this:
conda create --name snowflakes biopython
What if I do not want to install any program? It seems I can not do that:
» conda create --name tryout
Error: too few arguments, must supply command line package specs or --file
You can specify one or more default packages to install when creating
an environment. Doing so allows you to call conda create without
explicitly providing any package names.
To set the provided packages, call conda config like this:
conda config --add create_default_packages PACKAGE_NAME
|
You can give a package name of just "python" to get a base, empty install.
conda create --name myenv python
conda create --name myenv python=3.4
|
Cleanest way to obtain the numeric prefix of a string
|
What is the cleanest way to obtain the numeric prefix of a string in Python?
By "clean" I mean simple, short, readable. I couldn't care less about performance, and I suppose that it is hardly measurable in Python anyway.
For example:
Given the string '123abc456def', what is the cleanest way to obtain the string '123'?
The code below obtains '123456':
input = '123abc456def'
output = ''.join(c for c in input if c in '0123456789')
So I am basically looking for some way to replace the if with a while.
|
You can use itertools.takewhile, which will iterate over your string (the iterable argument) until it encounters the first item for which the predicate function returns False:
>>> from itertools import takewhile
>>> input = '123abc456def'
>>> ''.join(takewhile(str.isdigit, input))
'123'
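A regular expression is an equally short alternative (a sketch); \d* matches the leading digits and falls back to an empty string when there are none:
import re
print(re.match(r'\d*', '123abc456def').group())  # '123'
print(re.match(r'\d*', 'abc123').group())        # ''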
|
how to print 3x3 array in python?
|
I need to print a 3 x 3 array for a game called TicTackToe.py. I know we can print stuff from a list in a horizontal or vertical way by using
listA=['a','b','c','d','e','f','g','h','i','j']
# VERTICAL PRINTING
for item in listA:
print item
Output:
a
b
c
or
# HORIZONTAL PRINTING
for item in listA:
print item,
Output:
a b c d e f g h i j
How can I print a mix of both, e.g. printing a 3x3 box
like
a b c
d e f
g h i
|
You can enumerate the items, and print a newline only every third item:
for index, item in enumerate('abcdefghij', start=1):
print item,
if not index % 3:
print
Output:
a b c
d e f
g h i
j
enumerate starts counting from zero by default, so I set start=1.
As @arekolek comments, if you're using Python 3, or have imported the print function from the future for Python 2, you can specify the line ending all in one go, instead of the two steps above:
for index, item in enumerate('abcdefghij', start=1):
print(item, end=' ' if index % 3 else '\n')
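Another common pattern (a sketch) is to slice the list into rows of three and join each row, which avoids the per-item bookkeeping:
listA = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
for row in (listA[i:i + 3] for i in range(0, len(listA), 3)):
    print(' '.join(row))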
|
Python - create an EXE that runs code as written, not as it was when compiled
|
I'm making a pygame program that is designed to be modular. I am building an exe with pygame2exe of the file main.py, which basically just imports the real main game and runs it. What I'm hoping for is a sort of launcher that will execute Python scripts from an EXE, rather than a single program containing all immutable files.
What is the best way to go about this? I've tried using imp to dynamically import all modules at runtime instead of implicitly importing them, but that seems to break object inheritance.
|
After some experiments I've found a solution.
Create a separate folder source in the main folder of the application. The source files will be placed here. Also place a file __init__.py in the folder. Let's name the main file main_module.py.
Add all of its contents as data files to the py2exe configuration setup.py. After compiling the program, these files will be placed in the dist folder.
data_files += [('source', glob('source/*.py'),)]
setup(
data_files=data_files,
.... # other options
windows=[
{
"script": "launcher.py",
"icon_resources": [(0, "resources/favicon.ico")]
}
    ]
)
Make launcher.py - its task is to import all system and required libraries like pygame, pyqt and so on. Then run your program:
import sys, time, os, hashlib, atexit # std modules
import PyQt5, ... # foreign libraries
sys.path.insert(0, 'source')
exec('import main_module')
Now main_module.py will be imported; if it imports your modules, they will be imported too in their places in the hierarchy. For example, the head of main_module.py can look like this:
import user_tweaks
from user_data import parser
These files user_tweaks.py and user_data.py should be located in the source folder at appropriate paths relative to main_module.py.
You may change the contents of the source folder without recompiling the program itself. Any time the program runs, it uses the fresh contents of source.
As a result you have an application folder with:
A separate launcher - a simple .exe file
All required modules
Your application with all its modules.
|
Disable Tensorflow Debugging information
|
By debugging information I mean what TensorFlow shows in my terminal about loaded libraries and found devices etc., not the python errors.
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: Graphics Device
major: 5 minor: 2 memoryClockRate (GHz) 1.0885
pciBusID 0000:04:00.0
Total memory: 12.00GiB
Free memory: 11.83GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:717] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Graphics Device, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 1.0KiB
...
|
There currently isn't a way of suppressing/controlling logging in TensorFlow
EDIT: View the page below for information on TensorFlow logging; with the new update, you're able to set the logging verbosity to either DEBUG, INFO, WARN, ERROR, or FATAL. For example:
tf.logging.set_verbosity(tf.logging.ERROR)
The page additionally goes over monitors which can be used with SKFlow models. Here is the page.
This doesn't block all logging, though. I have two solutions; one is a 'technically correct' solution (Linux) and the other involves rebuilding TensorFlow.
script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'
For the other, please see this answer which involves modifying source and rebuilding TensorFlow.
|
Python Iterate through list of list to make a new list in index sequence
|
How would you iterate through a list of lists, such as:
[[1,2,3,4], [5,6], [7,8,9]]
and construct a new list by grabbing the first item of each list, then the second, etc. So the above becomes this:
[1, 5, 7, 2, 6, 8, 3, 9, 4]
|
You can use a list comprehension along with itertools.izip_longest (or zip_longest in Python 3)
from itertools import izip_longest
a = [[1,2,3,4], [5,6], [7,8,9]]
[i for sublist in izip_longest(*a) for i in sublist if i is not None]
# [1, 5, 7, 2, 6, 8, 3, 9, 4]
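On Python 3 the same approach works with the renamed import (sketch):
from itertools import zip_longest
a = [[1, 2, 3, 4], [5, 6], [7, 8, 9]]
flat = [i for sublist in zip_longest(*a) for i in sublist if i is not None]
print(flat)  # [1, 5, 7, 2, 6, 8, 3, 9, 4]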
|
Error running basic tensorflow example
|
I have just reinstalled latest tensorflow on ubuntu:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
[sudo] password for ubuntu:
The directory '/home/ubuntu/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/ubuntu/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting tensorflow==0.7.1 from https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
Downloading https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl (13.8MB)
100% |████████████████████████████████| 13.8MB 32kB/s
Requirement already up-to-date: six>=1.10.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: protobuf==3.0.0b2 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: wheel in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: numpy>=1.8.2 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: setuptools in /usr/local/lib/python2.7/dist-packages (from protobuf==3.0.0b2->tensorflow==0.7.1)
Installing collected packages: tensorflow
Found existing installation: tensorflow 0.7.1
Uninstalling tensorflow-0.7.1:
Successfully uninstalled tensorflow-0.7.1
Successfully installed tensorflow-0.7.1
When following the directions to test, it fails with cannot import name pywrap_tensorflow:
$ ipython
/git/tensorflow/tensorflow/__init__.py in <module>()
21 from __future__ import print_function
22
---> 23 from tensorflow.python import *
/git/tensorflow/tensorflow/python/__init__.py in <module>()
43 _default_dlopen_flags = sys.getdlopenflags()
44 sys.setdlopenflags(_default_dlopen_flags | ctypes.RTLD_GLOBAL)
---> 45 from tensorflow.python import pywrap_tensorflow
46 sys.setdlopenflags(_default_dlopen_flags)
47
ImportError: cannot import name pywrap_tensorflow
Is there an additional change needed to my python or ubuntu/bash environment?
|
From the path in your stack trace (/git/tensorflow/tensorflow/…), it looks like your Python path may be loading the tensorflow libraries from the source directory, rather than the version that you have installed. As a result, it is unable to find the (compiled) pywrap_tensorflow library, which is installed in a different directory.
A common solution is to cd out of the /git/tensorflow directory before starting python or ipython.
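A quick way to check which copy Python would pick up, without triggering the failing import (sketch, Python 2):
import imp
# returns (file, pathname, description); the pathname should point into
# site-packages, not the /git/tensorflow source checkout
print(imp.find_module('tensorflow')[1])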
|
How to generate multiple plots by clicking a single plot for more infomation using clickable python events
|
I am in the process of developing an application which can generate a 2nd plot by clicking a data point in the 1st plot. I am using events to accomplish this.
Questions:
How to generate a 3rd plot by clicking a 2nd-plot data point? Is it possible to accomplish this?
How to generate simpler 3-layer synthetic data?
Code I have so far:
"""
compute the mean and stddev of 100 data sets and plot mean vs stddev.
When you click on one of the mu, sigma points, plot the raw data from
the dataset that generated the mean and stddev
"""
import numpy as np
import matplotlib.pyplot as plt
X = np.random.rand(100, 1000)
xs = np.mean(X, axis=1)
ys = np.std(X, axis=1)
fig = plt.figure()
ax = fig.add_subplot(211)
bx = fig.add_subplot(212)
# ax.set_title('click on point to plot time series')
# bx.set_title('click on point to plot time series')
line, = ax.plot(xs, ys, 'bs', picker=5) # 5 points tolerance
line1, = bx.plot(xs, ys, 'bo', picker=5) # 5 points tolerance
def onpick(event):
if event.artist!=line: return True
N = len(event.ind)
if not N: return True
figi = plt.figure()
for subplotnum, dataind in enumerate(event.ind):
ax = figi.add_subplot(N,1,subplotnum+1)
ax.plot(X[dataind])
ax.text(0.05, 0.9, 'mu=%1.3f\nsigma=%1.3f'%(xs[dataind], ys[dataind]),
transform=ax.transAxes, va='top')
ax.set_ylim(-0.5, 1.5)
bx = figi.add_subplot(N,1,subplotnum+1)
bx.plot(X[dataind])
bx.text(0.05, 0.9, 'mu=%1.3f\nsigma=%1.3f'%(xs[dataind], ys[dataind]),
transform=ax.transAxes, va='top')
bx.set_ylim(-0.5, 1.5)
figi.show()
return True
fig.canvas.mpl_connect('pick_event', onpick)
plt.show()
By clicking data points on the first plot, a 2nd plot will be generated.
Now the expectation is to generate a 3rd plot by clicking a 2nd-plot data point.
I have researched this, but I have not been successful :-( :-(
If my approach is wrong, please suggest any other alternative.
Thanks in advance!!!
|
synthetic 3 level data
import matplotlib.pyplot as plt
import numpy as np
# data source
data_bucket = {}
# l1: randn
# l2: sum(l1)
# l3: sum(l2)
# generate some 3 layer synthetic data
N = 1000
l1_count = 50
l2_count = 50
l3_count = 2
x = np.arange(N)
for j in range(l3_count):
l3 = []
for k in range(l2_count):
l2 = []
for m in range(l1_count):
l1 = data_bucket[(j, k, m)] = np.random.randn(N)
l2.append(np.sum(l1))
l2 = data_bucket[(j, k)] = np.asarray(l2)
l3.append(np.sum(l2))
data_bucket[(j, )] = np.asarray(l3)
The picker function
# only has to close over the data
def picker(event):
print(event.artist.get_gid())
print(event.ind)
# some function to generate next data key from gid + index
k = event.artist.get_gid() + tuple(event.ind)
try:
next_data = data_bucket[k]
except KeyError:
print("no generations of data left")
return
# make the previous generation of plot
fig, ax = plt.subplots()
# can use the key here to dispatch to different plotting functions
ax.plot(next_data, gid=k, picker=5, ls='-')
fig.canvas.mpl_connect('pick_event', picker)
Initial plot
fig, ax = plt.subplots()
for k in range(l3_count):
k = (k, )
ax.plot(data_bucket[k], gid=k, picker=5, ls='', marker='o')
fig.canvas.mpl_connect('pick_event', picker)
The tricky part of this is managing the mapping between the data in the 'current' figure and the next layer of data. All mpl artists have a gid attribute which can be used to uniquely identify them, so here I use that + the index to generate keys into a dictionary; the keys are tuples of integers of varying length. This was just the first thing that popped into my head when trying to make synthetic 3-layer data. In principle any keying system that maps the gid of the picked artist + the index in that line -> next layer of data will work.
You can then use the same picker function for all of your figures and it only needs to close over the data source. All of this could (should?) be rolled up into a single class.
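For reference, here is a minimal sketch of what such a class could look like (the name DrillDownPlotter and its layout are illustrative, not part of the original answer; it assumes the data_bucket built above):
import matplotlib.pyplot as plt

class DrillDownPlotter:
    """Bundle the data source and the pick handler together (a sketch)."""
    def __init__(self, data_bucket, top_keys):
        self.data = data_bucket
        fig, ax = plt.subplots()
        for key in top_keys:
            ax.plot(self.data[key], gid=key, picker=5, ls='', marker='o')
        fig.canvas.mpl_connect('pick_event', self.on_pick)

    def on_pick(self, event):
        # gid + index of the picked point -> key of the next layer down
        key = event.artist.get_gid() + tuple(event.ind)
        if key not in self.data:
            print("no generations of data left")
            return
        fig, ax = plt.subplots()
        ax.plot(self.data[key], gid=key, picker=5, ls='-')
        fig.canvas.mpl_connect('pick_event', self.on_pick)

# usage, with the synthetic data from above:
# DrillDownPlotter(data_bucket, [(j,) for j in range(l3_count)])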
|
Python pip install gives "Command "python setup.py egg_info" failed with error code 1"
|
I'm new to python and have been trying to install some packages with pip. I always get this error message though:
"Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\"
As an example this is with the package "unroll".
Any suggestions?
Benjamin
|
About the error code
According to python documentation
This module makes available standard errno system symbols. The value of each symbol is the corresponding integer value. The names and descriptions are borrowed from linux/include/errno.h, which should be pretty all-inclusive.
Error code 1 is defined in errno.h and means Operation not permitted.
About your error
setuptools seems not to be installed. Just follow the installation instructions from the PyPI website.
If it's already installed, try
pip install --upgrade setuptools
If it's already up to date, check that the module ez_setup is not missing. If it is, then
pip install ez_setup
Then try again
pip install unroll
|
Edit the value of every Nth item in a list
|
What's the most pythonic way of performing an arithmetic operation on every nth value in a list? For example, if I start with list1:
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
I would like to add 1 to every second item, which would give:
list2 = [1, 3, 3, 5, 5, 7, 7, 9, 9, 11]
I've tried:
list1[::2]+1
and also:
for x in list1:
x=2
list2 = list1[::x] + 1
|
You could use slicing with a list comprehension as follows:
In [26]: list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
In [27]: list1[1::2] = [x+1 for x in list1[1::2]]
In [28]: list1
Out[28]: [1, 3, 3, 5, 5, 7, 7, 9, 9, 11]
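If you need this for an arbitrary step n rather than every second item, a small generalization of the same idea could look like this (a sketch, with n chosen here as 3):
n = 3
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list1[n-1::n] = [x + 1 for x in list1[n-1::n]]
# list1 is now [1, 2, 4, 4, 5, 7, 7, 8, 10, 10]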
|
What is a DynamicClassAttribute and how do I use it?
|
As of Python 3.4, there is a descriptor called DynamicClassAttribute. The documentation states:
types.DynamicClassAttribute(fget=None, fset=None, fdel=None, doc=None)
Route attribute access on a class to __getattr__.
This is a descriptor, used to define attributes that act differently when accessed through an instance and through a class. Instance access remains normal, but access to an attribute through a class will be routed to the class's __getattr__ method; this is done by raising AttributeError.
This allows one to have properties active on an instance, and have virtual attributes on the class with the same name (see Enum for an example).
New in version 3.4.
It is apparently used in the enum module:
# DynamicClassAttribute is used to provide access to the `name` and
# `value` properties of enum members while keeping some measure of
# protection from modification, while still allowing for an enumeration
# to have members named `name` and `value`. This works because enumeration
# members are not set directly on the enum class -- __getattr__ is
# used to look them up.
@DynamicClassAttribute
def name(self):
"""The name of the Enum member."""
return self._name_
@DynamicClassAttribute
def value(self):
"""The value of the Enum member."""
return self._value_
I realise that enums are a little special, but I don't understand how this relates to the DynamicClassAttribute. What does it mean that those attributes are dynamic, how is this different from a normal property, and how do I use a DynamicClassAttribute to my advantage?
|
New Version:
I was a bit disappointed with the previous answer so I decided to rewrite it a bit:
First have a look at the source code of DynamicClassAttribute and you'll probably notice that it looks very much like a normal property - except for the __get__ method:
def __get__(self, instance, ownerclass=None):
if instance is None:
# Here is the difference, the normal property just does: return self
if self.__isabstractmethod__:
return self
raise AttributeError()
elif self.fget is None:
raise AttributeError("unreadable attribute")
return self.fget(instance)
So what this means is that if you access a DynamicClassAttribute (that isn't abstract) on the class, it raises an AttributeError instead of returning self. For instances, instance is not None, so __get__ behaves identically to property.__get__.
For normal classes that just results in a visible AttributeError when accessing the attribute:
from types import DynamicClassAttribute
class Fun():
@DynamicClassAttribute
def has_fun(self):
return False
Fun.has_fun
AttributeError - Traceback (most recent call last)
That by itself is not very helpful until you take a look at the "class attribute lookup" procedure when using metaclasses (I found a nice image of this in this blog).
Because when an attribute access raises an AttributeError and the class has a metaclass, Python looks at the metaclass's __getattr__ method and sees if that can resolve the attribute. To illustrate this with a minimal example:
from types import DynamicClassAttribute
# Metaclass
class Funny(type):
def __getattr__(self, value):
print('search in meta')
# Normally you would implement here some ifs/elifs or a lookup in a dictionary
# but I'll just return the attribute
return Funny.dynprop
# Metaclasses dynprop:
dynprop = 'Meta'
class Fun(metaclass=Funny):
def __init__(self, value):
self._dynprop = value
@DynamicClassAttribute
def dynprop(self):
return self._dynprop
And here comes the "dynamic" part. If you call the dynprop on the class it will search in the meta and return the meta's dynprop:
Fun.dynprop
which prints:
search in meta
'Meta'
So we invoked the metaclass.__getattr__ and returned the original attribute (which was defined with the same name as the new property).
While for instances the dynprop of the Fun-instance is returned:
Fun('Not-Meta').dynprop
we get the overridden attribute:
'Not-Meta'
My conclusion from this is that DynamicClassAttribute is important if you want to allow subclasses to have an attribute with the same name as one used in the metaclass. You'll shadow it on instances but it's still accessible if you call it on the class.
I did go into the behaviour of Enum in the old version so I left it in here:
Old Version
The DynamicClassAttribute is just useful (I'm not really sure on that point) if you suspect there could be naming conflicts between an attribute that is set on a subclass and a property on the base-class.
You'll need to know at least some basics about metaclasses, because this will not work without using metaclasses (a nice explanation on how class attributes are called can be found in this blog post) because the attribute lookup is slightly different with metaclasses.
Suppose you have:
class Funny(type):
dynprop = 'Very important meta attribute, do not override'
class Fun(metaclass=Funny):
def __init__(self, value):
self._stub = value
@property
def dynprop(self):
return 'Haha, overridden it with {}'.format(self._stub)
and then call:
Fun.dynprop
<property at 0x1b3d9fd19a8>
and on the instance we get:
Fun(2).dynprop
'Haha, overridden it with 2'
Bad ... it's lost. But wait, we can use the metaclass's special lookup: let's implement a __getattr__ (fallback) and implement dynprop as a DynamicClassAttribute. According to its documentation, that's its purpose - to fall back to the __getattr__ if it's called on the class:
from types import DynamicClassAttribute
class Funny(type):
def __getattr__(self, value):
print('search in meta')
return Funny.dynprop
dynprop = 'Meta'
class Fun(metaclass=Funny):
def __init__(self, value):
self._dynprop = value
@DynamicClassAttribute
def dynprop(self):
return self._dynprop
now we access the class-attribute:
Fun.dynprop
which prints:
search in meta
'Meta'
So we invoked the metaclass.__getattr__ and returned the original attribute (which was defined with the same name as the new property).
And for instances:
Fun('Not-Meta').dynprop
we get the overridden attribute:
'Not-Meta'
Well that's not too bad, considering we can use metaclasses to reroute to previously defined but overridden attributes without creating an instance. This example is the opposite of what is done with Enum, where you define attributes on the subclass:
from enum import Enum
class Fun(Enum):
name = 'me'
age = 28
hair = 'brown'
and want to access these later-defined attributes by default.
Fun.name
# <Fun.name: 'me'>
but you also want to allow accessing the name attribute that was defined as DynamicClassAttribute (which returns which name the variable actually has):
Fun('me').name
# 'name'
because otherwise how could you access the name of 28?
Fun.hair.age
# <Fun.age: 28>
# BUT:
Fun.hair.name
# returns 'hair'
See the difference? Why does the second one don't return <Fun.name: 'me'>? That's because of this use of DynamicClassAttribute. So you can shadow the original property but "release" it again later. This behaviour is the reverse of that shown in my example and requires at least the usage of __new__ and __prepare__. But for that you need to know how that exactly works and is explained in a lot of blogs and stackoverflow-answers that can explain it much better than I can so I'll skip going into that much depth (and I'm not sure if I could solve it in short order).
Actual use-cases might be sparse but given time one can propably think of some...
Very nice discussion on the documentation of DynamicClassAttribute: "we added it because we needed it"
|
Is there a need to close files that have no reference to them?
|
As a complete beginner to programming, I am trying to understand the basic concepts of opening and closing files. One exercise I am doing is creating a script that allows me to copy the contents from one file to another.
in_file = open(from_file)
indata = in_file.read()
out_file = open(to_file, 'w')
out_file.write(indata)
out_file.close()
in_file.close()
I have tried to shorten this code and came up with this:
indata = open(from_file).read()
open(to_file, 'w').write(indata)
This works and looks a bit more efficient to me. However, this is also where I get confused. I think I left out the references to the opened files; there was no need for the in_file and out_file variables. However, does this leave me with two files that are open, but have nothing referring to them? How do I close these, or is there no need to?
Any help that sheds some light on this topic is much appreciated.
|
The pythonic way to deal with this is to use the with context manager:
with open(from_file) as in_file, open(to_file, 'w') as out_file:
indata = in_file.read()
out_file.write(indata)
Used with files like this, with will ensure all the necessary cleanup is done for you, even if read() or write() throw errors.
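A small illustration of that cleanup (assuming from_file and to_file are defined as in the question): after the block exits, both handles report themselves as closed even though close() was never called explicitly.
with open(from_file) as in_file, open(to_file, 'w') as out_file:
    out_file.write(in_file.read())

print(in_file.closed, out_file.closed)  # True True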
|
Mysterious exceptions when making many concurrent requests from urllib.request to HTTPServer
|
I am trying to do this Matasano crypto challenge that involves doing a timing attack against a server with an artificially slowed-down string comparison function. It says to use "the web framework of your choosing", but I didn't feel like installing a web framework, so I decided to use the HTTPServer class built into the http.server module.
I came up with something that worked, but it was very slow, so I tried to speed it up using the (poorly-documented) thread pool built into multiprocessing.dummy. It was much faster, but I noticed something strange: if I make 8 or fewer requests concurrently, it works fine. If I have more than that, it works for a while and gives me errors at seemingly random times. The errors seem to be inconsistent and not always the same, but they usually have Connection refused, invalid argument, OSError: [Errno 22] Invalid argument, urllib.error.URLError: <urlopen error [Errno 22] Invalid argument>, BrokenPipeError: [Errno 32] Broken pipe, or urllib.error.URLError: <urlopen error [Errno 61] Connection refused> in them.
Is there some limit to the number of connections the server can handle? I don't think the number of threads per se is the problem, because I wrote a simple function that did the slowed-down string comparison without running the web server, and called it with 500 simultaneous threads, and it worked fine. I don't think that simply making requests from that many threads is the problem, because I have made crawlers that used over 100 threads (all making simultaneous requests to the same website) and they worked fine. It looks like maybe the HTTPServer is not meant to reliably host production websites that get large amounts of traffic, but I am surprised that it is this easy to make it crash.
I tried gradually removing stuff from my code that looked unrelated to the problem, as I usually do when I diagnose mysterious bugs like this, but that wasn't very helpful in this case. It seemed like as I was removing seemingly unrelated code, the number of connections that the server could handle gradually increased, but there was not a clear cause of the crashes.
Does anyone know how to increase the number of requests I can make at once, or at least why this is happening?
My code is complicated, but I came up with this simple program that demonstrates the problem:
#!/usr/bin/env python3
import os
import random
from http.server import BaseHTTPRequestHandler, HTTPServer
from multiprocessing.dummy import Pool as ThreadPool
from socketserver import ForkingMixIn, ThreadingMixIn
from threading import Thread
from time import sleep
from urllib.error import HTTPError
from urllib.request import urlopen
class FancyHTTPServer(ThreadingMixIn, HTTPServer):
pass
class MyRequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
sleep(random.uniform(0, 2))
self.send_response(200)
self.end_headers()
self.wfile.write(b"foo")
def log_request(self, code=None, size=None):
pass
def request_is_ok(number):
try:
urlopen("http://localhost:31415/test" + str(number))
except HTTPError:
return False
else:
return True
server = FancyHTTPServer(("localhost", 31415), MyRequestHandler)
try:
Thread(target=server.serve_forever).start()
with ThreadPool(200) as pool:
for i in range(10):
numbers = [random.randint(0, 99999) for j in range(20000)]
for j, result in enumerate(pool.imap(request_is_ok, numbers)):
if j % 20 == 0:
print(i, j)
finally:
server.shutdown()
server.server_close()
print("done testing server")
For some reason, the program above works fine unless it has over 100 threads or so, but my real code for the challenge can only handle 8 threads. If I run it with 9, I usually get connection errors, and with 10, I always get connection errors. I tried using concurrent.futures.ThreadPoolExecutor, concurrent.futures.ProcessPoolExecutor, and multiprocessing.pool instead of multiprocessing.dummy.pool and none of those seemed to help. I tried using a plain HTTPServer object (without the ThreadingMixIn) and that just made things run very slowly and didn't fix the problem. I tried using ForkingMixIn and that didn't fix it either.
What am I supposed to do about this? I am running Python 3.5.1 on a late-2013 MacBook Pro running OS X 10.11.3.
EDIT: I tried a few more things, including running the server in a process instead of a thread, as a simple HTTPServer, with the ForkingMixIn, and with the ThreadingMixIn. None of those helped.
EDIT: This problem is stranger than I thought. I tried making one script with the server, and another with lots of threads making requests, and running them in different tabs in my terminal. The process with the server ran fine, but the one making requests crashed. The exceptions were a mix of ConnectionResetError: [Errno 54] Connection reset by peer, urllib.error.URLError: <urlopen error [Errno 54] Connection reset by peer>, OSError: [Errno 41] Protocol wrong type for socket, urllib.error.URLError: <urlopen error [Errno 41] Protocol wrong type for socket>, urllib.error.URLError: <urlopen error [Errno 22] Invalid argument>.
I tried it with a dummy server like the one above, and if I limited the number of concurrent requests to 5 or fewer, it worked fine, but with 6 requests, the client process crashed. There were some errors from the server, but it kept going. The client crashed regardless of whether I was using threads or processes to make the requests. I then tried putting the slowed-down function in the server and it was able to handle 60 concurrent requests, but it crashed with 70. This seems like it may contradict the evidence that the problem is with the server.
EDIT: I tried most of the things I described using requests instead of urllib.request and ran into similar problems.
EDIT: I am now running OS X 10.11.4 and running into the same problems.
|
You're using the default listen() backlog value, which is probably the cause of a lot of those errors. This is not the number of simultaneous clients with connection already established, but the number of clients waiting on the listen queue before the connection is established. Change your server class to:
class FancyHTTPServer(ThreadingMixIn, HTTPServer):
def server_activate(self):
self.socket.listen(128)
128 is a reasonable limit. You might want to check socket.SOMAXCONN or your OS somaxconn if you want to increase it further. If you still have random errors under heavy load, you should check your ulimit settings and increase if needed.
I did that with your example and I got over 1000 threads running fine, so I think that should solve your problem.
Update
If it improved but it's still crashing with 200 simultaneous clients, then I'm pretty sure your main problem was the backlog size. Be aware that your problem is not the number of concurrent clients, but the number of concurrent connection requests. A brief explanation on what that means, without going too deep into TCP internals.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(BACKLOG)
while running:
conn, addr = s.accept()
do_something(conn, addr)
In this example, the socket is now accepting connections on the given port, and the s.accept() call will block until a client connects. You can have many clients trying to connect simultaneously, and depending on your application you might not be able to call s.accept() and dispatch the client connection as fast as the clients are trying to connect. Pending clients are queued, and the max size of that queue is determined by the BACKLOG value. If the queue is full, clients will fail with a Connection Refused error.
Threading doesn't help, because what the ThreadingMixIn class does is to execute the do_something(conn, addr) call in a separate thread, so the server can return to the mainloop and the s.accept() call.
You can try increasing the backlog further, but there will be a point where that won't help because if the queue grows too large some clients will timeout before the server performs the s.accept() call.
So, as I said above, your problem is the number of simultaneous connection attempts, not the number of simultaneous clients. Maybe 128 is enough for your real application, but you're getting an error on your test because you're trying to connect with all 200 threads at once and flooding the queue.
Don't worry about ulimit unless you get a Too many open files error, but if you want to increase the backlog beyond 128, do some research on socket.SOMAXCONN. This is a good start: https://utcc.utoronto.ca/~cks/space/blog/python/AvoidSOMAXCONN
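As a sketch of how socket.SOMAXCONN can be used (a variation on the server_activate snippet above, an assumption rather than something from the linked post), you could cap whatever backlog you pick at the limit the OS reports:
import socket
from http.server import HTTPServer
from socketserver import ThreadingMixIn

class FancyHTTPServer(ThreadingMixIn, HTTPServer):
    def server_activate(self):
        # Ask for a large backlog, but never more than the kernel's
        # advertised maximum, so the request isn't silently truncated.
        self.socket.listen(min(1024, socket.SOMAXCONN))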
|
Why did Django 1.9 replace tuples () with lists [] in settings and URLs?
|
I am a bit curious to know why Django 1.9 replaced tuples () with lists [] in settings, URLs, and other configuration files.
I just upgraded to Django 1.9 and noticed these changes. What is the logic behind them?
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles'
]
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
urls.py
urlpatterns = [
url(r'^', admin.site.urls),
]
Is anything different because of these changes?
|
It is explained in issue #8846 (emphasis mine):
In the documentation for 'Creating your own settings' there's a
recommendation which reads "For settings that are sequences, use
tuples instead of lists. This is purely for performance."
This is bunk. Profiling shows that tuples run no faster than lists for
most operations (certainly looping, which we are likely to do most
often). On the other hand, list-literal syntax has the advantage that
it doesn't collapse to a single value when you have a single item and
omit the trailing comma, like tuple syntax. Using list syntax is no
slower, more legible and less error prone. An often-expressed view in
the wider Python community seems that tuples should not be considered
as immutable lists. They are intended as fixed-length records - indeed
the mathematical concept of a tuple is quite distinct from that of a
sequence.
Also see this answer for a more up-to-date discussion.
Another answer (not directly related to this issue) demonstrates that accessing elements is actually faster with a list.
Update and further information: It is correct that the above issue was closed years ago, but I included it because it explained the rationale behind the decision and many similar discussions refer to the same ticket. The actual implementation decision was triggered after the following discussion on django-developers started by core Django developer Aymeric Augustin:
I prefer them [lists] for two reasons:
1) All these settings are sequences of similar things. Such values are
best represented with lists, unless they have to be immutable, in
which case a tuple can be used. (tuples are both "namedtuples without
names" and "immutable lists" in Python.)
2) Lists aren't prone to the "missing comma in single-item tuple"
problem which bites beginners and experienced pythonistas alike.
Django even has code to defend against this mistake for a handful of
settings. Search for "tuple_settings" in the source.
And the switch to lists actually happened in issue #24149 which also referred to the above discussion.
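The "missing comma in single-item tuple" problem mentioned in both quotes is easy to demonstrate (a toy example, not from the ticket or the mailing list):
APPS_TUPLE = ('django.contrib.admin')      # missing comma: this is a str!
APPS_TUPLE_OK = ('django.contrib.admin',)  # the trailing comma makes a tuple
APPS_LIST = ['django.contrib.admin']       # lists don't have this pitfall

print(type(APPS_TUPLE))     # <class 'str'>
print(type(APPS_TUPLE_OK))  # <class 'tuple'>
print(type(APPS_LIST))      # <class 'list'>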
|
Counterintuitive behaviour of int() in python
|
It's clearly stated in the docs that int(number) is a flooring type conversion:
int(1.23)
1
and int(string) returns an int if and only if the string is an integer literal.
int('1.23')
ValueError
int('1')
1
Is there any special reason for that? I find it counterintuitive that the function floors in one case, but not the other.
|
There is no special reason. Python is simply applying its general principle of not performing implicit conversions, which are well-known causes of problems, particularly for newcomers, in languages such as Perl and Javascript.
int(some_string) is an explicit request to convert a string to integer format; the rules for this conversion specify that the string must contain a valid integer literal representation. int(float) is an explicit request to convert a float to an integer; the rules for this conversion specify that the float's fractional portion will be truncated.
In order for int("3.1459") to return 3 the interpreter would have to implicitly convert the string to a float. Since Python doesn't support implicit conversions, it chooses to raise an exception instead.
|
Difference in sequence of query generated in Django and Postgres for select_for_update
|
I'm facing a strange situation where the sequence of queries logged in Django and Postgres is different when using select_for_update() inside a transaction.atomic() block.
Basically, I have a ModelForm where I'm validating the cleaned_data against the database for a duplicate request. Then, in the create view's form_valid() method, I'm saving the instance. To have both operations inside the same transaction, I'm overriding the post() method and wrapping those two method calls inside transaction.atomic().
Here's the code for whatever I said above:
# Form
class MenuForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
user_id = kwargs.pop('user_id', None)
super(MenuForm, self).__init__(*args, **kwargs)
def clean(self):
cleaned_data = super(MenuForm, self).clean()
dish_name = cleaned_data.get('dish_name')
menus = Menu.objects.select_for_update().filter(user_id=self.user_id)
for menu in menus:
if menu.dish_name == dish_name:
self.add_error('dish_name', 'Dish already exists')
return cleaned_data
return cleaned_data
# CreateView
class MenuCreateView(CreateView):
form_class = MenuForm
def get_form_kwargs(self):
kwargs = super(MenuCreateView, self).get_form_kwargs()
kwargs.update({'user_id': self.request.session.get('user_id')})
return kwargs
def form_valid(self, form):
user = User.objects.get(id=self.request.session.get('user_id'))
form.instance.user = user
return super(MenuCreateView, self).form_valid(form)
def post(self, request, *args, **kwargs):
form = self.get_form()
with transaction.atomic():
if form.is_valid():
return self.form_valid(form)
else:
return self.form_invalid(form)
Now suppose I fire two requests at the same time to create a menu with the same dish. I expect the second request to fail, but both of them pass. It looks like the second transaction is not seeing the changes made in the previous transaction, because of which the set of menus returned by select_for_update() stays the same in both transactions.
Given that Postgres's default isolation level is READ COMMITTED, I expect the changes to be visible. So I tried logging the queries to see that COMMIT; is fired at the right time. Here's the query log from Django and from Postgres:
Django Log:
SELECT "menu"."id", "menu"."dish_id", "menu"."dish_name" FROM "menu" WHERE ("menu"."dish_name" = "Test Dish") FOR UPDATE; args=("Test Dish")
INSERT INTO "menu" ("dish_id", "dish_name") VALUES (2, "Test Dish") RETURNING "menu"."id"; args=(2, "Test Dish")
SELECT "menu"."id", "menu"."dish_id", "menu"."dish_name" FROM "menu" WHERE ("menu"."dish_name" = "Test Dish") FOR UPDATE; args=("Test Dish")
INSERT INTO "menu" ("dish_id", "dish_name") VALUES (2, "Test Dish") RETURNING "menu"."id"; args=(2, "Test Dish")
Postgres Log:
<2016-03-18 17:55:46.176 IST 0 2/31 56ebf3ca.aac0>LOG: statement: SHOW default_transaction_isolation
<2016-03-18 17:55:46.177 IST 0 2/32 56ebf3ca.aac0>LOG: statement: SET TIME ZONE 'UTC'
<2016-03-18 17:55:46.178 IST 0 2/33 56ebf3ca.aac0>LOG: statement: SELECT t.oid, typarray
FROM pg_type t JOIN pg_namespace ns
ON typnamespace = ns.oid
WHERE typname = 'hstore';
<2016-03-18 17:55:46.182 IST 0 2/34 56ebf3ca.aac0>LOG: statement: BEGIN
<2016-03-18 17:55:46.301 IST 0 3/2 56ebf3ca.aac1>LOG: statement: SHOW default_transaction_isolation
<2016-03-18 17:55:46.302 IST 0 3/3 56ebf3ca.aac1>LOG: statement: SET TIME ZONE 'UTC'
<2016-03-18 17:55:46.302 IST 0 3/4 56ebf3ca.aac1>LOG: statement: SELECT t.oid, typarray
FROM pg_type t JOIN pg_namespace ns
ON typnamespace = ns.oid
WHERE typname = 'hstore';
<2016-03-18 17:55:46.312 IST 0 3/5 56ebf3ca.aac1>LOG: statement: BEGIN
<2016-03-18 17:55:46.963 IST 0 3/5 56ebf3ca.aac1>LOG: statement: SELECT "menu"."id", "menu"."dish_id", "menu"."dish_name" FROM "menu"
WHERE ("menu"."dish_name" = "Test Dish") FOR UPDATE
<2016-03-18 17:55:46.964 IST 0 2/34 56ebf3ca.aac0>LOG: statement: SELECT "menu"."id", "menu"."dish_id", "menu"."dish_name" FROM "menu"
WHERE ("menu"."dish_name" = "Test Dish") FOR UPDATE
<2016-03-18 17:55:47.040 IST 23712 3/5 56ebf3ca.aac1>LOG: statement: INSERT INTO "menu" ("dish_id", "dish_name") VALUES (2, "Test Dish")RETURNING "menu"."id"
<2016-03-18 17:55:47.061 IST 23712 3/5 56ebf3ca.aac1>LOG: statement: COMMIT
<2016-03-18 17:55:47.229 IST 23713 2/34 56ebf3ca.aac0>LOG: statement: INSERT INTO "menu" ("dish_id", "dish_name") VALUES (2, "Test Dish")RETURNING "menu"."id"
<2016-03-18 17:55:47.231 IST 23713 2/34 56ebf3ca.aac0>LOG: statement: COMMIT
Postgres.conf:
max_connections = 100
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_line_prefix = '<%m %x %v %c>'
log_statement = 'all'
As you can see, the order of the SELECT and INSERT queries is not the same in both logs. I'm unable to understand why this would happen. Also, if you notice, the session_id for the SELECT queries in the Postgres log is different. Could that explain something here?
And if this is expected behaviour, how could I solve the core issue here: avoiding concurrent INSERT queries based on an existing record?
UPDATE:
I didn't mention that the actual logic to ignore a duplicate menu is not just based on the dish name. The one above is a simplified example.
Considering menu model as:
class Menu:
user_id = models.IntegerField()
dish = models.ForeignKey(Dish)
order_start_time = models.DateTimeField()
order_end_time = models.DateTimeField()
The actual logic goes like this:
Fetch all menus with the dish_name from the db.
Check the order_start_time and order_end_time for all of those menus, and see if any of them overlaps with the order_start_time and order_end_time of the new menu. If a conflict is found, avoid the addition.
So we can add two menus for dish d1, having order windows [9am-10am] and [2pm-3pm].
|
EDITED:
How to make Django validate overlapping reservations?
It is possible to add a special model method, validate_unique:
from django.db import models
from django.core.validators import ValidationError
from django.forms.forms import NON_FIELD_ERRORS
class Dish(models.Model):
name = models.CharField('Dish name', max_length=200)
class Menu(models.Model):
user_id = models.IntegerField()
dish = models.ForeignKey(Dish)
order_start_time = models.DateTimeField()
order_end_time = models.DateTimeField()
def validate_unique(self, *args, **kwargs):
# call inherited unique validators
super().validate_unique(*args, **kwargs) # or super(Menu, self) for Python2.7
# query if DB already has object with same dish
# and overlapping reservation
# [order_start_time, order_end_time]
qs = self.__class__._default_manager.filter(
order_start_time__lte=self.order_end_time,
order_end_time__gte=self.order_start_time,
dish=self.dish,
)
# and this object is not the same we are working with
if not self._state.adding and self.pk is not None:
qs = qs.exclude(pk=self.pk)
if qs.exists():
raise ValidationError({
NON_FIELD_ERRORS: ['Overlapping order dates for dish'],
})
Let's try it in the console:
from core.models import *
m=Menu(user_id=1, dish_id=1, order_start_time='2016-03-22 10:00', order_end_time='2016-03-22 15:00')
m.validate_unique()
# no output here - all is ok
m.save()
print(m.id)
8
# lets add duplicate
m=Menu(user_id=1, dish_id=1, order_start_time='2016-03-22 12:00', order_end_time='2016-03-22 13:00')
m.validate_unique()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/el/tmp/hypothesis_test/menu/core/models.py", line 29, in validate_unique
NON_FIELD_ERRORS: ['Overlapping order dates for dish'],
django.core.exceptions.ValidationError: {'__all__': ['Overlapping order dates for dish']}
# excellent! dup is found!
# But! Django helps you find dups but allows you to add them to db if you want it!
# It's responsibility of your application not to add duplicates.
m.save()
print(m.id)
9
How to be sure nobody can add a duplicate?
In this case you need to create a CONSTRAINT at the database level.
In PostgreSQL console:
CREATE EXTENSION btree_gist;
-- our table:
SELECT * FROM core_menu;
id | user_id | order_start_time | order_end_time | dish_id
----+---------+------------------------+------------------------+---------
8 | 1 | 2016-03-22 13:00:00+03 | 2016-03-22 18:00:00+03 | 1
9 | 1 | 2016-03-22 15:00:00+03 | 2016-03-22 16:00:00+03 | 1
DELETE FROM core_menu WHERE id=9; -- we should remove dups before adding unique constraint
ALTER TABLE core_menu
ADD CONSTRAINT core_menu_exclude_dish_same_tstzrange_constr
EXCLUDE USING gist (dish_id WITH =, tstzrange(order_start_time, order_end_time) WITH &&);
Now let's create a duplicate object and add it to the db:
m=Menu(user_id=1, dish_id=1, order_start_time='2016-03-22 13:00', order_end_time='2016-03-22 14:00')
m.save()
Traceback (most recent call last):
File "/Users/el/tmp/hypothesis_test/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.IntegrityError: ERROR: conflicting key value violates exclusion constraint "core_menu_exclude_dish_same_tstzrange_constr"
DETAIL: Key (dish_id, tstzrange(order_start_time, order_end_time))=(1, ["2016-03-22 13:00:00+00","2016-03-22 14:00:00+00")) conflicts with existing key (dish_id, tstzrange(order_start_time, order_end_time))=(1, ["2016-03-22 10:00:00+00","2016-03-22 15:00:00+00")).
Excellent!
Now data is validated at program and db levels.
|
How to call a function with a dictionary that contains more items than the function has parameters?
|
I am looking for the best way to combine a function with a dictionary that contains more items than the function's inputs.
Basic **kwargs unpacking fails in this case:
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
foo(**d)
--> TypeError: foo() got an unexpected keyword argument 'c'
After some research I came up with the following approach:
import inspect
# utilities
def get_input_names(function):
'''get arguments names from function'''
return inspect.getargspec(function)[0]
def filter_dict(dict_,keys):
return {k:dict_[k] for k in keys}
def combine(function,dict_):
'''combine a function with a dictionary that may contain more items than the function's inputs '''
filtered_dict = filter_dict(dict_,get_input_names(function))
return function(**filtered_dict)
# examples
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
print combine(foo,d)
--> 3
My question is: is this a good way of dealing with this problem, or is there a better practice, or perhaps a mechanism in the language that I'm missing?
|
How about making a decorator that passes through only the allowed keyword arguments:
import inspect
def get_input_names(function):
'''get arguments names from function'''
return inspect.getargspec(function)[0]
def filter_dict(dict_,keys):
return {k:dict_[k] for k in keys}
def filter_kwargs(func):
def func_wrapper(**kwargs):
return func(**filter_dict(kwargs, get_input_names(func)))
return func_wrapper
@filter_kwargs
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
print(foo(**d))
What is nice about this decorator is that it is generic and reusable. And you would not need to change the way you call and use your target functions.
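One caveat: inspect.getargspec is deprecated on Python 3. A variant of get_input_names built on inspect.signature (a sketch, assuming Python 3.3+) behaves the same and also picks up keyword-only parameters:
import inspect

def get_input_names(function):
    '''get argument names from a function via its signature'''
    return [name
            for name, param in inspect.signature(function).parameters.items()
            if param.kind in (param.POSITIONAL_OR_KEYWORD, param.KEYWORD_ONLY)]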
|
Allowing 'fuzzy' translations in Django pages?
|
I've done some research and found that django translations don't show up when a string is marked as "fuzzy".
However, I haven't been able to find any documentation on whether I can override this behaviour.
Is there a Django setting that can be used to allow Django (or gettext) to use "fuzzy translations"?
I know a lot of the automated translations won't be perfect, but this is for demonstration, development and testing for an open-source product.
I'd rather have users be able to develop in their own language with "approximate" translations, then use that as an incentive to check them off as they go.
|
It would be unfortunate to show these translations as some of them are most certainly wrong. You are supposed to remove the fuzzy tag when you update the translations and revise the guessed translations that are marked as fuzzy.
However, you may run a tool to quickly delete the fuzzy markers from a .po file: Removing all fuzzy entries of a PO file
UPDATE
Here is a great overview of the GNU gettext work-flow: https://www.gnu.org/software/gettext/manual/gettext.html#Overview
It is msgfmt that strips the fuzzy translations. It has an option --use-fuzzy that includes the fuzzy translations.
msgfmt is wrapped by compilemessages django admin command, which since version 1.8 has the --use-fuzzy option too (https://docs.djangoproject.com/en/1.9/ref/django-admin/#compilemessages)
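For example, with Django 1.8 or later you can keep the fuzzy entries when compiling the catalogs:
python manage.py compilemessages --use-fuzzy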
|
Why is a double semicolon a SyntaxError in Python?
|
I know that semicolons are unnecessary in Python, but they can be used to cram multiple statements onto a single line, e.g.
>>> x = 42; y = 54
I always thought that a semicolon was equivalent to a line break. So I was a bit surprised to learn (h/t Ned Batchelder on Twitter) that a double semicolon is a SyntaxError:
>>> x = 42
>>> x = 42;
>>> x = 42;;
File "<stdin>", line 1
x = 42;;
^
SyntaxError: invalid syntax
I assumed the last program was equivalent to x = 42\n\n. I'd have thought the statement between the semicolons was treated as an empty line, a no-op. Apparently not.
Why is this an error?
|
From the Python grammar, we can see that ; is not defined as \n. The parser expects another statement after a ;, except if there's a newline after it:
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
That is: a small statement, then any number of "semicolon + small statement" pairs, then optionally a single trailing semicolon, and finally a newline.
That's why x=42;; doesn't work: there isn't a statement between the two semicolons, and "nothing" isn't a statement. If there were any complete statement between them, like a pass or even just a 0, the code would work.
x = 42;0; # Fine
x = 42;pass; # Fine
x = 42;; # Syntax error
if x == 42:; print("Yes") # Syntax error - "if x == 42:" isn't a complete statement
|
Pythonic way to avoid "if x: return x" statements
|
I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
def check_all_conditions():
x = check_size()
if x:
return x
x = check_color()
if x:
return x
x = check_tone()
if x:
return x
x = check_flavor()
if x:
return x
return None
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
x and return x
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times.
|
As an alternative to Martijn's fine answer, you could chain or. This will return the first truthy value, or None if there's no truthy value:
def check_all_conditions():
return check_size() or check_color() or check_tone() or check_flavor() or None
Demo:
>>> x = [] or 0 or {} or -1 or None
>>> x
-1
>>> x = [] or 0 or {} or '' or None
>>> x is None
True
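If the list of checks grows, a plain loop over the check functions (an alternative sketch, not part of the original answer) keeps the same short-circuit behaviour and still calls each check at most once:
def check_all_conditions():
    checks = (check_size, check_color, check_tone, check_flavor)
    for check in checks:
        result = check()
        if result:
            return result
    return None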
|
Python: the same char does not compare equal
|
I have text in my database. I send some text from XHR to my view. The find function does not find some unicode chars.
I want to find the selected text using just:
text.find(selection)
but sometimes the variable 'selection' has a char like this:
ę # in xhr, unichr(281)
while in the variable 'text' there is a char:
ę # in the db it is two chars, unichr(101) + unichr(808)
|
Here unicodedata.normalize might help you.
Basically if you normalize the data coming from the db, and normalize your selection to the same form, you should have a better result when using str.find, str.__contains__ (i.e. in), str.index, and friends.
>>> u1 = chr(281)
>>> u2 = chr(101) + chr(808)
>>> print(u1, u2)
ę ę
>>> u1 == u2
False
>>> unicodedata.normalize('NFC', u2) == u1
True
NFC stands for the Normal Form Composed form. You can read up here for some description of the other possible forms.
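Applied to the original problem, that could look like the following sketch (text and selection are the variables from the question; note that the returned index refers to the normalized text, which may differ in length from the original):
import unicodedata

def find_normalized(text, selection):
    # Bring both strings to the same normal form before searching.
    norm = lambda s: unicodedata.normalize('NFC', s)
    return norm(text).find(norm(selection))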
|
Difference between "raise" and "raise e"?
|
In python, is there a difference between raise and raise e in an except block?
dis is showing me different results, but I don't know what it means.
What's the end behavior of both?
import dis
def a():
try:
raise Exception()
except Exception as e:
raise
def b():
try:
raise Exception()
except Exception as e:
raise e
dis.dis(a)
# OUT: 4 0 SETUP_EXCEPT 13 (to 16)
# OUT: 5 3 LOAD_GLOBAL 0 (Exception)
# OUT: 6 CALL_FUNCTION 0
# OUT: 9 RAISE_VARARGS 1
# OUT: 12 POP_BLOCK
# OUT: 13 JUMP_FORWARD 22 (to 38)
# OUT: 6 >> 16 DUP_TOP
# OUT: 17 LOAD_GLOBAL 0 (Exception)
# OUT: 20 COMPARE_OP 10 (exception match)
# OUT: 23 POP_JUMP_IF_FALSE 37
# OUT: 26 POP_TOP
# OUT: 27 STORE_FAST 0 (e)
# OUT: 30 POP_TOP
# OUT: 7 31 RAISE_VARARGS 0
# OUT: 34 JUMP_FORWARD 1 (to 38)
# OUT: >> 37 END_FINALLY
# OUT: >> 38 LOAD_CONST 0 (None)
# OUT: 41 RETURN_VALUE
dis.dis(b)
# OUT: 4 0 SETUP_EXCEPT 13 (to 16)
# OUT: 5 3 LOAD_GLOBAL 0 (Exception)
# OUT: 6 CALL_FUNCTION 0
# OUT: 9 RAISE_VARARGS 1
# OUT: 12 POP_BLOCK
# OUT: 13 JUMP_FORWARD 25 (to 41)
# OUT: 6 >> 16 DUP_TOP
# OUT: 17 LOAD_GLOBAL 0 (Exception)
# OUT: 20 COMPARE_OP 10 (exception match)
# OUT: 23 POP_JUMP_IF_FALSE 40
# OUT: 26 POP_TOP
# OUT: 27 STORE_FAST 0 (e)
# OUT: 30 POP_TOP
# OUT: 7 31 LOAD_FAST 0 (e)
# OUT: 34 RAISE_VARARGS 1
# OUT: 37 JUMP_FORWARD 1 (to 41)
# OUT: >> 40 END_FINALLY
# OUT: >> 41 LOAD_CONST 0 (None)
# OUT: 44 RETURN_VALUE
|
There is no difference in this case. raise without arguments will always raise the last exception thrown (which is also accessible with sys.exc_info()).
The reason the bytecode is different is that Python is a dynamic language and the interpreter doesn't really "know" that e refers to the (unmodified) exception that is currently being handled. But this may not always be the case; consider:
try:
raise Exception()
except Exception as e:
if foo():
e = OtherException()
raise e
What is e now? There is no way to tell when compiling the bytecode (only when actually running the program).
In simple examples like yours, it might be possible for the Python interpreter to "optimize" the bytecode, but so far no one has done this. And why should they? It's a micro-optimization at best and may still break in subtle ways in obscure conditions. There is a lot of other fruit that is hanging a lot lower than this and is more nutritious to boot ;-)
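To see what a bare raise actually re-raises, you can inspect sys.exc_info() inside the handler (a small illustration, not part of the original answer):
import sys

try:
    raise Exception("boom")
except Exception as e:
    exc_type, exc_value, exc_tb = sys.exc_info()
    print(exc_value is e)  # True -- bare `raise` re-raises exactly this object
    raise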
|
How to use inverse of a GenericRelation
|
I must be really misunderstanding something with the GenericRelation field from Django's content types framework.
To create a minimal self contained example, I will use the polls example app from the tutorial. Add a generic foreign key field into the Choice model, and make a new Thing model:
class Choice(models.Model):
...
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
thing = GenericForeignKey('content_type', 'object_id')
class Thing(models.Model):
choices = GenericRelation(Choice, related_query_name='things')
With a clean db, synced up tables, and create a few instances:
>>> poll = Poll.objects.create(question='the question', pk=123)
>>> thing = Thing.objects.create(pk=456)
>>> choice = Choice.objects.create(choice_text='the choice', pk=789, poll=poll, thing=thing)
>>> choice.thing.pk
456
>>> thing.choices.get().pk
789
So far so good - the relation works in both directions from an instance. But from a queryset, the reverse relation is very weird:
>>> Choice.objects.values_list('things', flat=1)
[456]
>>> Thing.objects.values_list('choices', flat=1)
[456]
Why does the inverse relation give me the id from the thing again? I expected instead the primary key of the choice, equivalent to the following result:
>>> Thing.objects.values_list('choices__pk', flat=1)
[789]
Those ORM queries generate SQL like this:
>>> print Thing.objects.values_list('choices__pk', flat=1).query
SELECT "polls_choice"."id" FROM "polls_thing" LEFT OUTER JOIN "polls_choice" ON ( "polls_thing"."id" = "polls_choice"."object_id" AND ("polls_choice"."content_type_id" = 10))
>>> print Thing.objects.values_list('choices', flat=1).query
SELECT "polls_choice"."object_id" FROM "polls_thing" LEFT OUTER JOIN "polls_choice" ON ( "polls_thing"."id" = "polls_choice"."object_id" AND ("polls_choice"."content_type_id" = 10))
The Django docs are generally excellent, but I can't understand the second query or find any documentation of that behaviour - it seems to return data from the wrong table completely.
|
TL;DR This was a bug in Django 1.7 that was fixed in Django 1.8.
Fix commit: 1c5cbf5e5d5b350f4df4aca6431d46c767d3785a
Fix PR: GenericRelation filtering targets related model's pk
Bug ticket: Should filter on related model primary key value, not the object_id value
The change went directly to master and did not go through a deprecation period, which isn't too surprising given that maintaining backwards compatibility here would have been really difficult. More surprising is that there was no mention of the issue in the 1.8 release notes, since the fix changes the behavior of currently working code.
The remainder of this answer is a description of how I found the commit using git bisect run. It's here for my own reference more than anything, so I can come back here if I ever need to bisect a large project again.
First we setup a django clone and a test project to reproduce the issue. I used virtualenvwrapper here, but you can do the isolation however you wish.
cd /tmp
git clone https://github.com/django/django.git
cd django
git checkout tags/1.7
mkvirtualenv djbisect
export PYTHONPATH=/tmp/django # get django clone into sys.path
python ./django/bin/django-admin.py startproject djbisect
export PYTHONPATH=$PYTHONPATH:/tmp/django/djbisect # test project into sys.path
export DJANGO_SETTINGS_MODULE=djbisect.mysettings
create the following file:
# /tmp/django/djbisect/djbisect/models.py
from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation
class GFKmodel(models.Model):
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
gfk = GenericForeignKey()
class GRmodel(models.Model):
related_gfk = GenericRelation(GFKmodel)
also this one:
# /tmp/django/djbisect/djbisect/mysettings.py
from djbisect.settings import *
INSTALLED_APPS += ('djbisect',)
Now we have a working project, create the test_script.py to use with git bisect run:
#!/usr/bin/env python
import subprocess, os, sys
db_fname = '/tmp/django/djbisect/db.sqlite3'
if os.path.exists(db_fname):
os.unlink(db_fname)
cmd = 'python /tmp/django/djbisect/manage.py migrate --noinput'
subprocess.check_call(cmd.split())
import django
django.setup()
from django.contrib.contenttypes.models import ContentType
from djbisect.models import GFKmodel, GRmodel
ct = ContentType.objects.get_for_model(GRmodel)
y = GRmodel.objects.create(pk=456)
x = GFKmodel.objects.create(pk=789, content_type=ct, object_id=y.pk)
query1 = GRmodel.objects.values_list('related_gfk', flat=1)
query2 = GRmodel.objects.values_list('related_gfk__pk', flat=1)
print(query1)
print(query2)
print(query1.query)
print(query2.query)
if query1[0] == 789 == query2[0]:
print('FIXED')
sys.exit(1)
else:
print('UNFIXED')
sys.exit(0)
The script must be executable, so add the flag with chmod +x test_script.py. It should be located in the directory that Django is cloned into, i.e. /tmp/django/test_script.py for me. This is because import django should pick up the locally checked-out django project first, not any version from site-packages.
The user interface of git bisect was designed to find out where bugs appeared, so the usual prefixes of "bad" and "good" are backwards when you're trying to find out when a certain bug was fixed. This may seem somewhat upside-down, but the test script should exit with success (return code 0) if the bug is present, and it should fail out (with nonzero return code) if the bug is fixed. This tripped me up a few times!
git bisect start --term-new=fixed --term-old=unfixed
git bisect fixed tags/1.8
git bisect unfixed tags/1.7
git bisect run ./test_script.py
So this process will do an automated search which eventually finds the commit where the bug was fixed. It takes some time, because there were a lot of commits between Django 1.7 and Django 1.8. It bisected 1362 revisions, roughly 10 steps, and eventually output:
1c5cbf5e5d5b350f4df4aca6431d46c767d3785a is the first fixed commit
commit 1c5cbf5e5d5b350f4df4aca6431d46c767d3785a
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Wed Dec 17 09:47:58 2014 +0200
Fixed #24002 -- GenericRelation filtering targets related model's pk
Previously Publisher.objects.filter(book=val) would target
book.object_id if book is a GenericRelation. This is inconsistent to
filtering over reverse foreign key relations, where the target is the
related model's primary key.
That's precisely the commit where the query has changed from the incorrect SQL (which gets data from the wrong table)
SELECT "djbisect_gfkmodel"."object_id" FROM "djbisect_grmodel" LEFT OUTER JOIN "djbisect_gfkmodel" ON ( "djbisect_grmodel"."id" = "djbisect_gfkmodel"."object_id" AND ("djbisect_gfkmodel"."content_type_id" = 8) )
into the correct version:
SELECT "djbisect_gfkmodel"."id" FROM "djbisect_grmodel" LEFT OUTER JOIN "djbisect_gfkmodel" ON ( "djbisect_grmodel"."id" = "djbisect_gfkmodel"."object_id" AND ("djbisect_gfkmodel"."content_type_id" = 8) )
Of course, from the commit hash we're able to find the pull request and the ticket easily on github. Hopefully this may help someone else one day too - bisecting Django can be tricky to setup due to the migrations!
|
Python send control + Q then control + A (special keys)
|
I need to send some special keystrokes and am unsure of how to do it.
I need to send Ctrl + Q followed by Ctrl + A to a terminal (I'm using Paramiko).
I have tried
shell = client.invoke_shell()
shell.send(chr(10))
time.sleep(5)
shell.send(chr(13))
shell.send('\x11')
shell.send('\x01')
print 'i tried'
I can see the two returns go in successfully, but then nothing; it doesn't quit picocom.
(Also note I have it the wrong way round: it's expecting Ctrl+A, then Ctrl+Q.)
If it helps, this is the device:
http://www.cisco.com/c/en/us/td/docs/routers/access/interfaces/eesm/software/configuration/guide/4451_config.html#pgfId-1069760
as you can see at step 2
Step 2 Exit the session from the switch, press Ctrl-a and Ctrl-q from your keyboard:
Switch# <type ^a^q>
Thanks for using picocom
Router#
UPDATE:
I have tried \x01\x16\x11\n but this returns
Switch#
Switch#
*** baud: 9600
*** flow: none
*** parity: none
*** databits: 8
*** dtr: down
Switch#
This looks like it could be another special command?
|
Just an assumption: maybe a pseudoterminal would help.
import paramiko
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(...)
channel = client.get_transport().open_session()
channel.get_pty()
channel.settimeout(5)
channel.exec_command('\x11\x01')
|
Importing installed package from script raises "AttributeError: module has no attribute" or "ImportError: cannot import name"
|
I have a script named requests.py that imports the requests package. The script either can't access attributes from the package, or can't import them. Why isn't this working and how do I fix it?
The following code raises an AttributeError.
import requests
res = requests.get('http://www.google.ca')
print(res)
Traceback (most recent call last):
File "/Users/me/dev/rough/requests.py", line 1, in <module>
import requests
File "/Users/me/dev/rough/requests.py", line 3, in <module>
requests.get('http://www.google.ca')
AttributeError: module 'requests' has no attribute 'get'
The following code raises an ImportError.
from requests import get
res = get('http://www.google.ca')
print(res)
Traceback (most recent call last):
File "requests.py", line 1, in <module>
from requests import get
File "/Users/me/dev/rough/requests.py", line 1, in <module>
from requests import get
ImportError: cannot import name 'get'
The following code raises an ImportError.
from requests.auth import AuthBase
class PizzaAuth(AuthBase):
"""Attaches HTTP Pizza Authentication to the given Request object."""
def __init__(self, username):
# setup any auth-related data here
self.username = username
def __call__(self, r):
# modify and return the request
r.headers['X-Pizza'] = self.username
return r
Traceback (most recent call last):
File "requests.py", line 1, in <module>
from requests.auth import AuthBase
File "/Users/me/dev/rough/requests.py", line 1, in <module>
from requests.auth import AuthBase
ImportError: No module named 'requests.auth'; 'requests' is not a package
|
This happens because your local module named requests.py shadows the installed requests module you are trying to use. The current directory is prepended to sys.path, so the local name takes precedence over the installed name.
An extra debugging tip when this comes up is to look at the traceback carefully, and realize that the name of your script matches the module you are trying to import:
Your script. Notice the name you used:
File "/Users/me/dev/rough/requests.py", line 1, in <module>
The module you are trying to import: requests
Rename your module to something else to avoid the name collision.
Python may generate a requests.pyc file next to your requests.py file (in the __pycache__ directory in Python 3). Remove that as well after your rename, as the interpreter will still reference that file, re-producing the error. However, the pyc file in __pycache__ should not affect your code if the py file has been removed.
In the example, renaming the file to my_requests.py, removing requests.pyc, and running again successfully prints <Response [200]>.
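As a quick sanity check after the rename (assuming the requests package is installed), the printed path should now point into site-packages rather than your project directory:
import requests
print(requests.__file__)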
|
limit() and sort() order pymongo and mongodb
|
Despite reading people's answers stating that the sort is done first, the evidence shows something different: the limit appears to be applied before the sort. Is there a way to force the sort to always happen first?
views = mongo.db.view_logging.find().sort([('count', 1)]).limit(10)
Whether I use .sort().limit() or .limit().sort(), the limit takes precedence. I wonder if this has something to do with pymongo...
|
According to the documentation, regardless of which goes first in your chain of commands, sort() will always be applied before the limit().
You can also study the .explain() results of your query and look at the execution stages - you will find that the sort input stage examines all of the filtered documents (in your case, all documents in the collection) and then the limit is applied.
Let's go through an example.
Imagine there is a foo database with a test collection having 6 documents:
>>> col = db.foo.test
>>> for doc in col.find():
... print(doc)
{'time': '2016-03-28 12:12:00', '_id': ObjectId('56f9716ce4b05e6b92be87f2'), 'value': 90}
{'time': '2016-03-28 12:13:00', '_id': ObjectId('56f971a3e4b05e6b92be87fc'), 'value': 82}
{'time': '2016-03-28 12:14:00', '_id': ObjectId('56f971afe4b05e6b92be87fd'), 'value': 75}
{'time': '2016-03-28 12:15:00', '_id': ObjectId('56f971b7e4b05e6b92be87ff'), 'value': 72}
{'time': '2016-03-28 12:16:00', '_id': ObjectId('56f971c0e4b05e6b92be8803'), 'value': 81}
{'time': '2016-03-28 12:17:00', '_id': ObjectId('56f971c8e4b05e6b92be8806'), 'value': 90}
Now, let's execute queries with different order of sort() and limit() and check the results and the explain plan.
Sort and then limit:
>>> from pprint import pprint
>>> cursor = col.find().sort([('time', 1)]).limit(3)
>>> sort_limit_plan = cursor.explain()
>>> pprint(sort_limit_plan)
{u'executionStats': {u'allPlansExecution': [],
u'executionStages': {u'advanced': 3,
u'executionTimeMillisEstimate': 0,
u'inputStage': {u'advanced': 6,
u'direction': u'forward',
u'docsExamined': 6,
u'executionTimeMillisEstimate': 0,
u'filter': {u'$and': []},
u'invalidates': 0,
u'isEOF': 1,
u'nReturned': 6,
u'needFetch': 0,
u'needTime': 1,
u'restoreState': 0,
u'saveState': 0,
u'stage': u'COLLSCAN',
u'works': 8},
u'invalidates': 0,
u'isEOF': 1,
u'limitAmount': 3,
u'memLimit': 33554432,
u'memUsage': 213,
u'nReturned': 3,
u'needFetch': 0,
u'needTime': 8,
u'restoreState': 0,
u'saveState': 0,
u'sortPattern': {u'time': 1},
u'stage': u'SORT',
u'works': 13},
u'executionSuccess': True,
u'executionTimeMillis': 0,
u'nReturned': 3,
u'totalDocsExamined': 6,
u'totalKeysExamined': 0},
u'queryPlanner': {u'indexFilterSet': False,
u'namespace': u'foo.test',
u'parsedQuery': {u'$and': []},
u'plannerVersion': 1,
u'rejectedPlans': [],
u'winningPlan': {u'inputStage': {u'direction': u'forward',
u'filter': {u'$and': []},
u'stage': u'COLLSCAN'},
u'limitAmount': 3,
u'sortPattern': {u'time': 1},
u'stage': u'SORT'}},
u'serverInfo': {u'gitVersion': u'6ce7cbe8c6b899552dadd907604559806aa2e9bd',
u'host': u'h008742.mongolab.com',
u'port': 53439,
u'version': u'3.0.7'}}
Limit and then sort:
>>> cursor = col.find().limit(3).sort([('time', 1)])
>>> limit_sort_plan = cursor.explain()
>>> pprint(limit_sort_plan)
{u'executionStats': {u'allPlansExecution': [],
u'executionStages': {u'advanced': 3,
u'executionTimeMillisEstimate': 0,
u'inputStage': {u'advanced': 6,
u'direction': u'forward',
u'docsExamined': 6,
u'executionTimeMillisEstimate': 0,
u'filter': {u'$and': []},
u'invalidates': 0,
u'isEOF': 1,
u'nReturned': 6,
u'needFetch': 0,
u'needTime': 1,
u'restoreState': 0,
u'saveState': 0,
u'stage': u'COLLSCAN',
u'works': 8},
u'invalidates': 0,
u'isEOF': 1,
u'limitAmount': 3,
u'memLimit': 33554432,
u'memUsage': 213,
u'nReturned': 3,
u'needFetch': 0,
u'needTime': 8,
u'restoreState': 0,
u'saveState': 0,
u'sortPattern': {u'time': 1},
u'stage': u'SORT',
u'works': 13},
u'executionSuccess': True,
u'executionTimeMillis': 0,
u'nReturned': 3,
u'totalDocsExamined': 6,
u'totalKeysExamined': 0},
u'queryPlanner': {u'indexFilterSet': False,
u'namespace': u'foo.test',
u'parsedQuery': {u'$and': []},
u'plannerVersion': 1,
u'rejectedPlans': [],
u'winningPlan': {u'inputStage': {u'direction': u'forward',
u'filter': {u'$and': []},
u'stage': u'COLLSCAN'},
u'limitAmount': 3,
u'sortPattern': {u'time': 1},
u'stage': u'SORT'}},
u'serverInfo': {u'gitVersion': u'6ce7cbe8c6b899552dadd907604559806aa2e9bd',
u'host': u'h008742.mongolab.com',
u'port': 53439,
u'version': u'3.0.7'}}
As you can see, in both cases the sort is applied first and affects all the 6 documents and then the limit limits the results to 3.
And, the execution plans are exactly the same:
>>> from copy import deepcopy # just in case
>>> cursor = col.find().sort([('time', 1)]).limit(3)
>>> sort_limit_plan = deepcopy(cursor.explain())
>>> cursor = col.find().limit(3).sort([('time', 1)])
>>> limit_sort_plan = deepcopy(cursor.explain())
>>> sort_limit_plan == limit_sort_plan
True
Also see:
How do you tell Mongo to sort a collection before limiting the results?
|
All possible ways to interleave two strings
|
I am trying to generate all possible ways to interleave any two arbitrary strings in Python.
For example: If the two strings are 'ab' and 'cd', the output I wish to get is:
['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
See a is always before b (and c before d). I am struggling to find a solution to this. I have tried itertools as shown below:
import itertools
def shuffle(s,t):
string = s+t
for i in itertools.permutations(string):
print(''.join(i))
shuffle('ab','cd')
But as expected, this returns all possible permutations disregarding order of a and b (and c and d).
|
The Idea
Let the two strings you want to interleave be s and t. We will use recursion to generate all the possible ways to interleave these two strings.
If at any point of time we have interleaved the first i characters of s and the first j characters of t to create some string res, then we have two ways to interleave them for the next step-
Append the i+1 th character of s to res
Append the j+1 th character of t to res
We continue this recursion till all characters of both the strings have been used and then we store this result in a list of strings lis as in the code below.
The Code
def interleave(s, t, res, i, j, lis):
if i == len(s) and j == len(t):
lis.append(res)
return
if i < len(s):
interleave(s, t, res + s[i], i + 1, j, lis)
if j < len(t):
interleave(s, t, res + t[j], i, j + 1, lis)
l = []
s = "ab"
t = "cd"
interleave(s, t, "", 0, 0, l)
print(l)
Output
['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
This implementation is as efficient as we can get (at least asymptotically) since we never generate the same string twice.
|
Why can't I break out of this itertools infinite loop?
|
In the REPL, we can usually interrupt an infinite loop with a sigint, i.e. ctrl+c, and regain control in the interpreter.
>>> while True: pass
...
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyboardInterrupt
>>>
But in this loop, the interrupt seems to be blocked and I have to kill the parent process to escape.
>>> *x, = itertools.repeat('x')
^C^C^C^C^C^C^C^C^\^\^\^\^\^Z^Z^Z^Z
Why is that?
|
A KeyboardInterrupt is only checked between Python bytecode instructions. Both itertools.repeat and the unpacking into x are handled entirely in C code, so the interpreter never reaches a point where it can handle the interrupt until the statement finishes - i.e. never.
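A sketch of a workaround (my addition, not from the original answer): drive the iterator from a Python-level loop, or bound it explicitly, so the interpreter gets a chance to check for the pending signal between iterations:
import itertools
# a Python-level loop is interruptible with Ctrl+C
# (it still runs forever and grows the list until you interrupt it)
x = []
for item in itertools.repeat('x'):
    x.append(item)
# or cap the otherwise-infinite iterator up front
x = list(itertools.islice(itertools.repeat('x'), 1000))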
|
Iterator selector in Python
|
Is there a standard pythonic way of selecting a value from a list of provided iterators without advancing those that were not selected?
Something in the vein of this for two iterators (don't judge this too hard: it was quickly thrown together just to illustrate the idea):
def iselect(i1, i2, f):
e1_read = False
e2_read = False
while True:
try:
if not e1_read:
e1 = next(i1)
e1_read = True
if not e2_read:
e2 = next(i2)
e2_read = True
if f(e1, e2):
yield e1
e1_read = False
else:
yield e2
e2_read = False
except StopIteration:
return
Note that if one uses something like this instead:
[e1 if f(e1, e2) else e2 for (e1, e2) in zip(i1, i2)]
then the non-selected iterator advances every time, which is not what I want.
|
The more-itertools package has a peekable wrapper for iterators. It would seem like this should allow for a very clean solution if I understand your question correctly. You need to peek at the current values of a set of iterators and only modify the chosen iterator by calling next() on it.
from more_itertools import peekable
# the implementation of iselect can be very clean if
# the iterators are peekable
def iselect(peekable_iters, selector):
"""
Parameters
----------
peekable_iters: list of peekables
This is the list of iterators which have been wrapped using
more-itertools peekable interface.
selector: function
A function that takes a list of values as input, and returns
the index of the selected value.
"""
    while True:
        peeked_vals = [it.peek(None) for it in peekable_iters]
        try:
            selected_idx = selector(peeked_vals)  # selector raises StopIteration when done
        except StopIteration:
            return  # PEP 479: don't let StopIteration escape from the generator
        yield next(peekable_iters[selected_idx])
Test this code:
# sample input iterators for testing
# assume python 3.x so range function returns iterable
iters = [range(i,5) for i in range(4)]
# the following could be encapsulated...
peekables = [peekable(it) for it in iters]
# sample selection function, returns index of minimum
# value among those being compared, or StopIteration if
# one of the lists contains None
def selector_func(vals_list):
if None in vals_list:
raise StopIteration
else:
return vals_list.index(min(vals_list))
for val in iselect(peekables, selector_func):
print(val)
Output:
0
1
1
2
2
2
3
3
3
3
4
|
pip install - locale.Error: unsupported locale setting
|
Full stacktrace:
$ pip install virtualenv
Traceback (most recent call last):
File "/usr/bin/pip", line 11, in <module>
sys.exit(main())
File "/usr/lib/python3.4/site-packages/pip/__init__.py", line 215, in main
locale.setlocale(locale.LC_ALL, '')
File "/usr/lib64/python3.4/locale.py", line 592, in setlocale
return _setlocale(category, locale)
locale.Error: unsupported locale setting
On the same server, I previously ran pip install virtualenv and it was Python 2.7.x at that time.
Now, I've just installed python3.4 using curl https://bootstrap.pypa.io/get-pip.py | python3.4.
$ pip --version
pip 8.1.1 from /usr/lib/python3.4/site-packages (python 3.4)
pip uninstall virtualenv throws the same error too
|
Try this:
$ export LC_ALL=C
Here is my locale settings:
$ locale
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_PAPER="C"
LC_NAME="C"
LC_ADDRESS="C"
LC_TELEPHONE="C"
LC_MEASUREMENT="C"
LC_IDENTIFICATION="C"
LC_ALL=C
Python2.7
$ uname -a
Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u6 (2015-11-09) x86_64 GNU/Linux
$ python --version
Python 2.7.9
$ pip --version
pip 8.1.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)
$ unset LC_ALL
$ pip install virtualenv
Traceback (most recent call last):
File "/usr/local/bin/pip", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/pip/__init__.py", line 215, in main
locale.setlocale(locale.LC_ALL, '')
File "/usr/lib/python2.7/locale.py", line 579, in setlocale
return _setlocale(category, locale)
locale.Error: unsupported locale setting
$ export LC_ALL=C
$ pip install virtualenv
Requirement already satisfied (use --upgrade to upgrade): virtualenv in /usr/local/lib/python2.7/dist-packages
|
map vs list; why different behaviour?
|
In the course of implementing the "Variable Elimination" algorithm for a Bayes' Nets program, I encountered an unexpected bug that was the result of an iterative map transformation of a sequence of objects.
For simplicity's sake, I'll use an analogous piece of code here:
>>> nums = [1, 2, 3]
>>> for x in [4, 5, 6]:
... # Uses n if x is odd, uses (n + 10) if x is even
... nums = map(
... lambda n: n if x % 2 else n + 10,
... nums)
...
>>> list(nums)
[31, 32, 33]
This is definitely the wrong result. Since [4, 5, 6] contains two even numbers, 10 should be added to each element at most twice. I was getting unexpected behaviour with this in the VE algorithm as well, so I modified it to convert the map iterator to a list after each iteration.
>>> nums = [1, 2, 3]
>>> for x in [4, 5, 6]:
... # Uses n if x is odd, uses (n + 10) if x is even
... nums = map(
... lambda n: n if x % 2 else n + 10,
... nums)
... nums = list(nums)
...
>>> list(nums)
[21, 22, 23]
From my understanding of iterables, this modification shouldn't change anything, but it does. Clearly, the n + 10 transform for the not x % 2 case is applied one fewer times in the list-ed version.
My Bayes Nets program worked as well after finding this bug, but I'm looking for an explanation as to why it occurred.
|
The answer is very simple: map is lazy in Python 3 - it returns an iterator rather than a list (in Python 2 it returns a list). Let me add some output to your example:
In [6]: nums = [1, 2, 3]
In [7]: for x in [4, 5, 6]:
...: nums = map(lambda n: n if x % 2 else n + 10, nums)
...: print(x)
...: print(nums)
...:
4
<map object at 0x7ff5e5da6320>
5
<map object at 0x7ff5e5da63c8>
6
<map object at 0x7ff5e5da6400>
In [8]: print(x)
6
In [9]: list(nums)
Out[9]: [31, 32, 33]
Note the In[8] - the value of x is 6. We could also transform the lambda function, passed to map in order to track the value of x:
In [10]: nums = [1, 2, 3]
In [11]: for x in [4, 5, 6]:
....: nums = map(lambda n: print(x) or (n if x % 2 else n + 10), nums)
....:
In [12]: list(nums)
6
6
6
6
6
6
6
6
6
Out[12]: [31, 32, 33]
Because map is lazy, the lambdas are only evaluated when list is finally called - and by then the loop has finished and x is 6, which is why you get the confusing output. Forcing the evaluation inside the loop, while x still has its intermediate values, produces the expected output.
In [13]: nums = [1, 2, 3]
In [14]: for x in [4, 5, 6]:
....: nums = map(lambda n: print(x) or (n if x % 2 else n + 10), nums)
....: nums = list(nums)
....:
4
4
4
5
5
5
6
6
6
In [15]: nums
Out[15]: [21, 22, 23]
|
Cycle a list from alternating sides
|
Given a list
a = [0,1,2,3,4,5,6,7,8,9]
how can I get
b = [0,9,1,8,2,7,3,6,4,5]
That is, produce a new list in which each successive element is alternately taken from the two sides of the original list?
|
>>> [a[-i//2] if i % 2 else a[i//2] for i in range(len(a))]
[0, 9, 1, 8, 2, 7, 3, 6, 4, 5]
Explanation:
This code picks numbers from the beginning (a[i//2]) and from the end (a[-i//2]) of a, alternatingly (if i%2 else). A total of len(a) numbers are picked, so this produces no ill effects even if len(a) is odd.
[-i//2 for i in range(len(a))] yields 0, -1, -1, -2, -2, -3, -3, -4, -4, -5,
[ i//2 for i in range(len(a))] yields 0, 0, 1, 1, 2, 2, 3, 3, 4, 4,
and i%2 alternates between False and True,
so the indices we extract from a are: 0, -1, 1, -2, 2, -3, 3, -4, 4, -5.
My assessment of pythonicness:
The nice thing about this one-liner is that it's short and shows symmetry (+i//2 and -i//2).
The bad thing, though, is that this symmetry is deceptive:
One might think that -i//2 were the same as i//2 with the sign flipped. But in Python, integer division returns the floor of the result instead of truncating towards zero. So -1//2 == -1.
Also, I find accessing list elements by index less pythonic than iteration.
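An iteration-based sketch along those lines (my addition, not part of the original answer): pair items from the front with items from the back, flatten, and keep only the first len(a) values so odd-length lists work too.
from itertools import chain, islice

def alternate_ends(a):
    # (a[0], a[-1]), (a[1], a[-2]), ... flattened, then truncated to len(a)
    interleaved = chain.from_iterable(zip(a, reversed(a)))
    return list(islice(interleaved, len(a)))

a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(alternate_ends(a))   # [0, 9, 1, 8, 2, 7, 3, 6, 4, 5]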
|
Variable assignment faster than one liner
|
I have encountered this weird behavior and failed to explain it. These are the benchmarks:
py -3 -m timeit "tuple(range(2000)) == tuple(range(2000))"
10000 loops, best of 3: 97.7 usec per loop
py -3 -m timeit "a = tuple(range(2000)); b = tuple(range(2000)); a==b"
10000 loops, best of 3: 70.7 usec per loop
How come comparison with variable assignment is faster than using a one liner with temporary variables by more than 27%?
By the Python docs, garbage collection is disabled during timeit so it can't be that. Is it some sort of an optimization?
The results may also be reproduced in Python 2.x though to lesser extent.
Running Windows 7, CPython 3.5.1, Intel i7 3.40 GHz, 64 bit both OS and Python. Seems like a different machine I've tried running at Intel i7 3.60 GHz with Python 3.5.0 does not reproduce the results.
Running using the same Python process with timeit.timeit() @ 10000 loops produced 0.703 and 0.804 respectively. Still shows although to lesser extent. (~12.5%)
|
My results were similar to yours: the code using variables was pretty consistently 10-20 % faster. However when I used IPython on the very same Python 3.4, I got these results:
In [1]: %timeit -n10000 -r20 tuple(range(2000)) == tuple(range(2000))
10000 loops, best of 20: 74.2 µs per loop
In [2]: %timeit -n10000 -r20 a = tuple(range(2000)); b = tuple(range(2000)); a==b
10000 loops, best of 20: 75.7 µs per loop
Notably, I never managed to get even close to the 74.2 µs for the former when I used -mtimeit from the command line.
So I decided to run the command with strace and indeed there is something fishy going on:
% strace -o withoutvars python3 -m timeit "tuple(range(2000)) == tuple(range(2000))"
10000 loops, best of 3: 134 usec per loop
% strace -o withvars python3 -mtimeit "a = tuple(range(2000)); b = tuple(range(2000)); a==b"
10000 loops, best of 3: 75.8 usec per loop
% grep mmap withvars|wc -l
46
% grep mmap withoutvars|wc -l
41149
Now that is a good reason for the difference. The code that does not use variables causes the mmap system call to be invoked almost 1000x more often than the one that uses intermediate variables.
The withoutvars is full of mmap/munmap for a 256k region; these same lines are repeated over and over again:
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000
munmap(0x7f32e56de000, 262144) = 0
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000
munmap(0x7f32e56de000, 262144) = 0
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000
munmap(0x7f32e56de000, 262144) = 0
The mmap call seems to be coming from the function _PyObject_ArenaMmap from Objects/obmalloc.c; the obmalloc.c also contains the macro ARENA_SIZE, which is #defined to be (256 << 10) (that is 262144); similarly the munmap matches the _PyObject_ArenaMunmap from obmalloc.c.
obmalloc.c says that
Prior to Python 2.5, arenas were never free()'ed. Starting with Python 2.5,
we do try to free() arenas, and use some mild heuristic strategies to increase
the likelihood that arenas eventually can be freed.
Thus these heuristics and the fact that Python object allocator releases these free arenas as soon as they're emptied lead to python3 -mtimeit 'tuple(range(2000)) == tuple(range(2000))' triggering pathological behaviour where one 256 kiB memory area is re-allocated and released repeatedly; and this allocation happens with mmap/munmap, which is comparatively costly as they're system calls - furthermore, mmap with MAP_ANONYMOUS requires that the newly mapped pages must be zeroed - even though Python wouldn't care.
The behaviour does not seem to be present with the code that uses an intermediate variable, possibly because it is using slightly more memory and never actually freeing all the objects from the last memory arena. Most notably it cannot be guaranteed that the code using intermediate variables is always faster - indeed in some setups it might be that using intermediate variables will result in extra mmap calls, whereas the code that compares return values directly might be fine.
|
Making len() work with instance methods
|
Is there a way to make len() work with instance methods without modifying the class?
Example of my problem:
>>> class A(object):
... pass
...
>>> a = A()
>>> a.__len__ = lambda: 2
>>> a.__len__()
2
>>> len(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'A' has no len()
Note:
different instances of A will have different __len__ methods attached
I cannot change the class A
|
No. Python always looks up special methods through the object's class. There are several good reasons for this, one being that repr(A) should use type(A).__repr__ instead of A.__repr__, which is intended to handle instances of A instead of the A class itself.
If you want different instances of A to compute their len differently, consider having __len__ delegate to another method:
class A(object):
def __len__(self):
return self._len()
a = A()
a._len = lambda: 2
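With that delegation in place, len() goes through type(a).__len__ and each instance supplies its own _len. A quick check (the b instance here is just for illustration):
>>> len(a)
2
>>> b = A()
>>> b._len = lambda: 10
>>> len(b)
10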
|
Is it possible to get pip to print the configuration it is using?
|
Is there any way to get pip to print the config it will attempt to use? For debugging purposes it would be very nice to know that:
config.ini files are in the correct place and pip is finding them.
The precedence of the config settings is treated in the way one would expect from the docs
|
Updated (answering comment)
You can start a Python console and do the following (if you use a virtualenv, don't forget to activate it first):
from pip import create_main_parser
parser = create_main_parser()
# print all config files that it will try to read
print(parser.files)
# reads parser files that are actually found and prints their names
print(parser.config.read(parser.files))
create_main_parser is the function that creates the parser pip uses to read parameters from the command line (optparse) and to load configuration files (configparser).
Possible file names for configurations are generated in get_config_files, including the file named by the PIP_CONFIG_FILE environment variable if it is set.
parser.config is an instance of RawConfigParser, so all the file names generated in get_config_files are passed to parser.config.read.
Attempt to read and parse a list of filenames, returning a list of filenames which were successfully parsed. If filenames is a string, it is treated as a single filename. If a file named in filenames cannot be opened, that file will be ignored. This is designed so that you can specify a list of potential configuration file locations (for example, the current directory, the user's home directory, and some system-wide directory), and all existing configuration files in the list will be read. If none of the named files exist, the ConfigParser instance will contain an empty dataset. An application which requires initial values to be loaded from a file should load the required file or files using read_file() before calling read() for any optional files:
|
Square root of complex numbers in python
|
I have run across some confusing behaviour with square roots of complex numbers in python. Running this code:
from cmath import sqrt
a = 0.2
b = 0.2 + 0j
print(sqrt(a / (a - 1)))
print(sqrt(b / (b - 1)))
gives the output
0.5j
-0.5j
A similar thing happens with
print(sqrt(-1 * b))
print(sqrt(-b))
It appears these pairs of statements should give the same answer?
|
Both answers (+0.5j and -0.5j) are correct, since they are complex conjugates -- i.e. the real part is identical, and the imaginary part is sign-flipped.
Looking at the code makes the behavior clear - the imaginary part of the result always has the same sign as the imaginary part of the input, as seen in lines 790 and 793:
r.imag = copysign(d, z.imag);
Since a/(a-1) is the plain float -0.25, it is implicitly converted to -0.25+0j (with a positive-zero imaginary part), so you get the positive root; b/(b-1) is evaluated with complex arithmetic and comes out as -0.25-0j (its imaginary part works out to -0.0 because of how signed zeros propagate through complex division), so your result is correspondingly negative.
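A small check of that signed-zero behaviour (my addition, using only the values from the question):
from cmath import sqrt

b = 0.2 + 0j
z = b / (b - 1)
print(z, z.imag)                    # (-0.25-0j) -0.0  <- negative zero imaginary part
print(sqrt(complex(-0.25, 0.0)))    # 0.5j
print(sqrt(complex(-0.25, -0.0)))   # -0.5j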
EDIT: This question has some useful discussion on the same issue.
|
What does x[x < 2] = 0 mean in Python?
|
I came across some code with a line similar to
x[x<2]=0
Playing around with variations, I am still stuck on what this syntax does.
Examples:
>>> x = [1,2,3,4,5]
>>> x[x<2]
1
>>> x[x<3]
1
>>> x[x>2]
2
>>> x[x<2]=0
>>> x
[0, 2, 3, 4, 5]
|
This only makes sense with NumPy arrays. The behavior with lists is useless, and specific to Python 2 (not Python 3). You may want to double-check if the original object was indeed a NumPy array (see further below) and not a list.
But in your code here, x is a simple list.
Since x < 2 is False, i.e. 0, x[x<2] is x[0], so x[0] gets changed.
Conversely, x[x>2] is x[True], i.e. x[1], so x[1] gets changed.
Why does this happen?
The rules for comparison are:
When you order two strings or two numeric types the ordering is done in the expected way (lexicographic ordering for string, numeric ordering for integers).
When you order a numeric and a non-numeric type, the numeric type comes first.
When you order two incompatible types where neither is numeric, they are ordered by the alphabetical order of their typenames:
So, we have the following order
numeric < list < string < tuple
See the accepted answer for How does Python compare string and int?.
If x is a NumPy array, then the syntax makes more sense because of boolean array indexing. In that case, x < 2 isn't a boolean at all; it's an array of booleans representing whether each element of x was less than 2. x[x < 2] = 0 then selects the elements of x that were less than 2 and sets those cells to 0. See Indexing.
>>> x = np.array([1., -1., -2., 3])
>>> x < 0
array([False, True, True, False], dtype=bool)
>>> x[x < 0] += 20 # All elements < 0 get increased by 20
>>> x
array([ 1., 19., 18., 3.]) # Only elements < 0 are affected
|
Compare two large dictionaries and create lists of values for keys they have in common
|
I have a two dictionaries like:
dict1 = { (1,2) : 2, (2,3): 3, (1,3): 3}
dict2 = { (1,2) : 1, (1,3): 2}
What I want as output is two list of values for the items which exist in both dictionaries:
[2,3]
[1,2]
What I am doing right now is something like this:
list1 = []
list2 = []
for key in dict1.keys():
if key in dict2.keys():
list1.append(dict1.get(key))
list2.append(dict2.get(key))
This code is taking too long running which is not something that I am looking forward to. I was wondering if there might be a more efficient way of doing it?
|
commons = set(dict1).intersection(set(dict2))
list1 = [dict1[k] for k in commons]
list2 = [dict2[k] for k in commons]
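With the dictionaries from the question this gives the expected values (sorted here only to make the output deterministic, since sets are unordered):
dict1 = {(1, 2): 2, (2, 3): 3, (1, 3): 3}
dict2 = {(1, 2): 1, (1, 3): 2}
commons = set(dict1).intersection(set(dict2))
print([dict1[k] for k in sorted(commons)])   # [2, 3]
print([dict2[k] for k in sorted(commons)])   # [1, 2]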
|
Decorator for a class method that caches return value after first access
|
My problem, and why
I'm trying to write a decorator for a class method, @cachedproperty. I want it to behave so that when the method is first called, the method is replaced with its return value. I also want it to behave like @property so that it doesn't need to be explicitly called. Basically, it should be indistinguishable from @property except that it's faster, because it only calculates the value once and then stores it. My idea is that this would not slow down instantiation like defining it in __init__ would. That's why I want to do this.
What I tried
First, I tried to override the fget method of the property, but it's read-only.
Next, I figured I'd try to implement a decorator that does needs to be called the first time but then caches the values. This isn't my final goal of a property-type decorator that never needs to be called, but I thought this would be a simpler problem to tackle first. In other words, this is a not-working solution to a slightly simpler problem.
I tried:
def cachedproperty(func):
""" Used on methods to convert them to methods that replace themselves
with their return value once they are called. """
def cache(*args):
self = args[0] # Reference to the class who owns the method
funcname = inspect.stack()[0][3] # Name of the function, so that it can be overridden.
setattr(self, funcname, func()) # Replace the function with its value
return func() # Return the result of the function
return cache
However, this doesn't seem work. I tested this with:
>>> class Test:
... @cachedproperty
... def test(self):
... print "Execute"
... return "Return"
...
>>> Test.test
<unbound method Test.cache>
>>> Test.test()
but I get an error about how the class didn't pass itself to the method:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method cache() must be called with Test instance as first argument (got nothing instead)
At this point, me and my limited knowledge of deep Python methods are very confused, and I have no idea where my code went wrong or how to fix it. (I've never tried to write a decorator before)
The question
How can I write a decorator that will return the result of calling a class method the first time it's accessed (like @property does), and be replaced with a cached value for all subsequent queries?
I hope this question isn't too confusing, I tried to explain it as well as I could.
|
If you don't mind alternative solutions, I'd recommend lru_cache
for example
from functools import lru_cache
class Test:
@property
@lru_cache(maxsize=None)
def calc(self):
print("Calculating")
return 1
Expected output
In [2]: t = Test()
In [3]: t.calc
Calculating
Out[3]: 1
In [4]: t.calc
Out[4]: 1
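As an aside (assuming Python 3.8 or newer, which postdates the original question), functools.cached_property does the same thing in one step:
from functools import cached_property

class Test:
    @cached_property
    def calc(self):
        print("Calculating")
        return 1
It stores the computed value in the instance's __dict__, so unlike lru_cache on a method it does not keep every instance alive in a shared cache.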
|
Let a class behave like it's a list in Python
|
I have a class which is essentially a collection/list of things. But I want to add some extra functions to this list. What I would like, is the following:
I have an instance li = MyFancyList(). Variable li should behave as if it were a list whenever I use it as a list: [e for e in li], li.expand(...), for e in li.
Plus it should have some special functions like li.fancyPrint(), li.getAMetric(), li.getName().
I currently use the following approach:
class MyFancyList:
def __iter__(self):
        return iter(self.li)
def fancyFunc(self):
# do something fancy
This is ok for usage as iterator like [e for e in li], but I do not have the full list behavior like li.expand(...).
A first guess is to inherit list into MyFancyList. But is that the recommended pythonic way to do? If yes, what is to consider? If no, what would be a better approach?
|
If you want only part of the list behavior, use composition (i.e. your instances hold a reference to an actual list) and implement only the methods necessary for the behavior you desire. These methods should delegate the work to the actual list any instance of your class holds a reference to, for example:
def __getitem__(self, item):
return self.li[item] # delegate to li.__getitem__
Implementing __getitem__ alone will give you a surprising amount of features, for example iteration and slicing.
>>> class WrappedList:
... def __init__(self, lst):
... self._lst = lst
... def __getitem__(self, item):
... return self._lst[item]
...
>>> w = WrappedList([1, 2, 3])
>>> for x in w:
... x
...
1
2
3
>>> w[1:]
[2, 3]
If you want the full behavior of a list, inherit from collections.UserList. UserList is a full Python implementation of the list datatype.
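A minimal sketch of that route (the fancyPrint name is taken from the question; the rest is my illustration):
from collections import UserList

class MyFancyList(UserList):
    def fancyPrint(self):
        print("Fancy:", ", ".join(repr(e) for e in self.data))

li = MyFancyList([1, 2, 3])
li.extend([4, 5])   # full list behavior for free
li.fancyPrint()     # Fancy: 1, 2, 3, 4, 5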
So why not inherit from list directly?
One major problem with inheriting directly from list (or any other builtin written in C) is that the code of the builtins may or may not call special methods overridden in classes defined by the user. Here's a relevant excerpt from the pypy docs:
Officially, CPython has no rule at all for when exactly overridden method of subclasses of built-in types get implicitly called or not. As an approximation, these methods are never called by other built-in methods of the same object. For example, an overridden __getitem__ in a subclass of dict will not be called by e.g. the built-in get method.
Another quote, from Luciano Ramalho's Fluent Python, page 351:
Subclassing built-in types like dict or list or str directly is error-prone because the built-in methods mostly ignore user-defined overrides. Instead of subclassing the built-ins, derive your classes from UserDict, UserList and UserString from the collections module, which are designed to be easily extended.
... and more, page 370+:
Misbehaving built-ins: bug or feature?
The built-in dict, list and str types are essential building blocks of Python itself, so they must be fast; any performance issues in them would severely impact pretty much everything else. That's why CPython adopted the shortcuts that cause their built-in methods to misbehave by not cooperating with methods overridden by subclasses.
After playing around a bit, the issues with the list builtin seem to be less critical (I tried to break it in Python 3.4 for a while but did not find a really obvious unexpected behavior), but I still wanted to post a demonstration of what can happen in principle, so here's one with a dict and a UserDict:
>>> class MyDict(dict):
... def __setitem__(self, key, value):
... super().__setitem__(key, [value])
...
>>> d = MyDict(a=1)
>>> d
{'a': 1}
>>> from collections import UserDict
>>> class MyUserDict(UserDict):
... def __setitem__(self, key, value):
... super().__setitem__(key, [value])
...
>>> m = MyUserDict(a=1)
>>> m
{'a': [1]}
As you can see, the __init__ method from dict ignored the overridden __setitem__ method, while the __init__ method from our UserDict did not.
|
Django REST Framework + Django REST Swagger + ImageField
|
I created a simple Model with an ImageField and I wanna make an api view with django-rest-framework + django-rest-swagger, that is documented and is able to upload the file.
Here is what I got:
models.py
from django.utils import timezone
from django.db import models
class MyModel(models.Model):
source = models.ImageField(upload_to=u'/photos')
is_active = models.BooleanField(default=False)
created_at = models.DateTimeField(default=timezone.now)
def __unicode__(self):
return u"photo {0}".format(self.source.url)
serializer.py
from .models import MyModel
class MyModelSerializer(serializers.ModelSerializer):
class Meta:
model = MyModel
fields = [
'id',
'source',
'created_at',
]
views.py
from rest_framework import generics
from .serializer import MyModelSerializer
class MyModelView(generics.CreateAPIView):
serializer_class = MyModelSerializer
parser_classes = (FileUploadParser, )
def post(self, *args, **kwargs):
"""
Create a MyModel
---
parameters:
- name: source
description: file
required: True
type: file
responseMessages:
- code: 201
message: Created
"""
return super(MyModelView, self).post(self, *args, **kwargs)
urls.py
from weddings.api.views import MyModelView
urlpatterns = patterns(
'',
url(r'^/api/mymodel/$', MyModelView.as_view()),
)
For me this should be pretty simple. However, I can't make the upload work. I always get this error response:
I've read this part of the documentation from django-rest-framework:
If the view used with FileUploadParser is called with a filename URL keyword argument, then that argument will be used as the filename. If it is called without a filename URL keyword argument, then the client must set the filename in the Content-Disposition HTTP header. For example Content-Disposition: attachment; filename=upload.jpg.
However the Header is being passed by django-rest-swagger in the Request Payload property (from chrome console).
If any more info is necessary, please let me know.
I'm using Django==1.8.8, djangorestframework==3.3.2 and django-rest-swagger==0.3.4.
|
I got this working by making a couple of changes to your code.
First, in models.py, change the ImageField name to file and use a relative path for the upload folder. When you upload a file as a binary stream, it's available in the request.data dictionary under the file key (request.data.get('file')), so the cleanest option is to map it to a model field with the same name.
from django.utils import timezone
from django.db import models
class MyModel(models.Model):
file = models.ImageField(upload_to=u'photos')
is_active = models.BooleanField(default=False)
created_at = models.DateTimeField(default=timezone.now)
def __unicode__(self):
return u"photo {0}".format(self.file.url)
In serializer.py, rename source field to file:
class MyModelSerializer(serializers.ModelSerializer):
class Meta:
model = MyModel
fields = ('id', 'file', 'created_at')
In views.py, don't call super, but call create():
from rest_framework import generics
from rest_framework.parsers import FileUploadParser
from .serializer import MyModelSerializer
class MyModelView(generics.CreateAPIView):
serializer_class = MyModelSerializer
parser_classes = (FileUploadParser,)
def post(self, request, *args, **kwargs):
"""
Create a MyModel
---
parameters:
- name: file
description: file
required: True
type: file
responseMessages:
- code: 201
message: Created
"""
return self.create(request, *args, **kwargs)
I've used Postman Chrome extension to test this. I've uploaded images as binaries and I've manually set two headers:
Content-Disposition: attachment; filename=upload.jpg
Content-Type: */*
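If you prefer to test from Python instead of Postman, a rough client-side sketch using the requests library looks like this (the local URL is an assumption based on the urls.py above):
import requests

with open("upload.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/api/mymodel/",
        data=f,   # raw binary body, which is what FileUploadParser expects
        headers={
            "Content-Disposition": "attachment; filename=upload.jpg",
            "Content-Type": "*/*",
        },
    )
print(resp.status_code)   # expect 201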
|
Image processing issues with blood vessels
|
I'm trying to extract the blood vessels from an image, and to do so I first equalize the image by applying CLAHE histogram equalization, obtaining the following result:
clahe = cv2.createCLAHE(clipLimit=100.0, tileGridSize=(100,100))
self.cl1 = clahe.apply(self.result_array)
self.cl1 = 255 - self.cl1
And then I'm using OTSU threshold to extract the blood vessels, but failing to do it well:
self.ret, self.thresh = cv2.threshold(self.cl1, 0,255,cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((1,1),np.float32)/1
self.thresh = cv2.erode(self.thresh, kernel, iterations=3)
self.thresh = cv2.dilate(self.thresh, kernel, iterations=3)
Here's the result:
Obviously there's a lot of noise. I've tried using Median blur, but it just clusters the noise and makes it into a blob, in some places. How do I go about removing the noise to get the blood vessels?
This is the original image from which I'm trying to extract the blood vessels:
|
Getting really good results is a difficult problem (you'll probably have to somehow model the structure of the blood vessels and the noise) but you can probably still do better than filtering.
One technique for addressing this kind of problems, inspired by the Canny edge detector, is using two thresholds - [hi,low] and classifying a pixel p with response r as belonging to a blood vessel V if r > hi || (r > lo && one of p's neighbors is in V).
Also, when it comes to filtering, both bilateral filtering and meanshift filtering are good for noisy images.
kernel3 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
kernel5 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
kernel7 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(7,7))
t_lo = 136
t_hi = 224
blured = cv2.pyrMeanShiftFiltering(img, 3, 9)
#blured = cv2.bilateralFilter(img, 9, 32, 72)
clahe = cv2.createCLAHE(clipLimit=128.0, tileGridSize=(64, 64))
cl1 = clahe.apply(blured)
cl1 = 255 - cl1
ret, thresh_hi = cv2.threshold(cl1, t_hi, 255, cv2.THRESH_TOZERO)
ret, thresh_lo = cv2.threshold(cl1, t_lo, 255, cv2.THRESH_TOZERO)
Low threshold image
Hi threshold image
Preparations and cleanup:
current = np.copy(thresh_hi)
prev = np.copy(current)
prev[:] = 0
current = cv2.morphologyEx(current, cv2.MORPH_OPEN, kernel5)
iter_num = 0
max_iter = 1000
Not the most efficient way to do that... but easy to implement:
while np.sum(current - prev) > 0 and iter_num < max_iter:
iter_num = iter_num+1
prev = np.copy(current)
current = cv2.dilate(current, kernel3)
tmp = np.copy(current)
current[np.where(thresh_lo == 0)] = 0
Initial mask
Remove small blobs:
contours, hierarchy = cv2.findContours(current, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
area = cv2.contourArea(contour)
if area < 256:
cv2.drawContours( current, [contour], 0, [0,0,0], -1 )
After removing small blobs
Morphological cleanup:
opening = cv2.morphologyEx(current, cv2.MORPH_OPEN, kernel7)
cl1[np.where(opening == 0)] = 0
Result
This is by no means optimal, but I think it should provide you with enough tools to start.
|
Why are Python's arrays slow?
|
I expected array.array to be faster than lists, as arrays seem to be unboxed.
However, I get the following result:
In [1]: import array
In [2]: L = list(range(100000000))
In [3]: A = array.array('l', range(100000000))
In [4]: %timeit sum(L)
1 loop, best of 3: 667 ms per loop
In [5]: %timeit sum(A)
1 loop, best of 3: 1.41 s per loop
In [6]: %timeit sum(L)
1 loop, best of 3: 627 ms per loop
In [7]: %timeit sum(A)
1 loop, best of 3: 1.39 s per loop
What could be the cause of such a difference?
|
The storage is "unboxed", but every time you access an element Python has to "box" it (embed it in a regular Python object) in order to do anything with it. For example, your sum(A) iterates over the array, and boxes each integer, one at a time, in a regular Python int object. That costs time. In your sum(L), all the boxing was done at the time the list was created.
So, in the end, an array is generally slower, but requires substantially less memory.
Here's the relevant code from a recent version of Python 3, but the same basic ideas apply to all CPython implementations since Python was first released.
Here's the code to access a list item:
PyObject *
PyList_GetItem(PyObject *op, Py_ssize_t i)
{
/* error checking omitted */
return ((PyListObject *)op) -> ob_item[i];
}
There's very little to it: somelist[i] just returns the i'th object in the list (and all Python objects in CPython are pointers to a struct whose initial segment conforms to the layout of a struct PyObject).
And here's the __getitem__ implementation for an array with type code l:
static PyObject *
l_getitem(arrayobject *ap, Py_ssize_t i)
{
return PyLong_FromLong(((long *)ap->ob_item)[i]);
}
The raw memory is treated as a vector of platform-native C long integers; the i'th C long is read up; and then PyLong_FromLong() is called to wrap ("box") the native C long in a Python long object (which, in Python 3, where Python 2's distinction between int and long is gone, is displayed as type int).
This boxing has to allocate new memory for a Python int object, and spray the native C long's bits into it. In the context of the original example, this object's lifetime is very brief (just long enough for sum() to add the contents into a running total), and then more time is required to deallocate the new int object.
This is where the speed difference comes from, always has come from, and always will come from in the CPython implementation.
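As an aside (my addition, not part of the original answer): if you want both compact unboxed storage and fast arithmetic, NumPy keeps the data unboxed and also performs the summation in C:
import numpy as np

A = np.arange(10**8, dtype=np.int64)
print(A.sum())   # summed entirely in C, no per-element boxing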
|
Numpy: Why doesn't 'a += a.T' work?
|
As stated in scipy lecture notes, this will not work as expected:
a = np.random.randint(0, 10, (1000, 1000))
a += a.T
assert np.allclose(a, a.T)
But why? How does being a view affect this behavior?
|
This problem is due to internal designs of numpy.
It basically boils down to the fact that the in-place operator changes the values as it goes, and those already-changed values then get used where the original values were actually intended.
This is discussed in this bug report, and it does not seem to be fixable.
The reason why it works for smaller size arrays seems to be because of how the data is buffered while being worked on.
To exactly understand why the issue crops up, I am afraid you will have to dig into the internals of numpy.
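A sketch of the usual workaround (my addition): avoid the in-place update when the right-hand side aliases the left-hand side, either by building a new array or by copying the transpose first.
import numpy as np

a = np.random.randint(0, 10, (1000, 1000))
a = a + a.T            # a temporary is allocated, so a.T is fully read before a changes
assert np.allclose(a, a.T)

b = np.random.randint(0, 10, (1000, 1000))
b += b.T.copy()        # equivalent: break the aliasing with an explicit copy
assert np.allclose(b, b.T)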
|
What exactly is __weakref__ in Python?
|
Surprisingly, there's no explicit documentation for __weakref__. Weak references are explained here. __weakref__ is also shortly mentioned in the documentation of __slots__. But I could not find anything about __weakref__ itself.
What exactly is __weakref__?
- Is it just a member acting as a flag: If present, the object may be weakly-referenced?
- Or is it a function/variable that can be overridden/assigned to get a desired behavior? How?
|
[Edit 1: Explain the linked list nature and when weakrefs are re-used]
Interestingly enough, the official documentation is somewhat non-enlightening on this topic:
Without a __weakref__ variable for each instance, classes defining __slots__ do not support weak references to its instances. If weak reference support is needed, then add __weakref__ to the sequence of strings in the __slots__ declaration.
The type object documentation on the topic does not seem to help things along too much:
When a typeâs __slots__ declaration contains a slot named __weakref__, that slot becomes the weak reference list head for instances of the type, and the slotâs offset is stored in the typeâs tp_weaklistoffset.
Weak references form a linked list. The head of that list (the first weak reference to an object) is available via __weakref__. Weakrefs are re-used whenever possible, so the list (not a Python list!) typically is either empty or contains a single element.
Example:
When you first use weakref.ref(), you create a new weak reference chain for the target object. The head of this chain is the new weakref and gets stored in the target object's __weakref__:
>>> import weakref
>>> class A: pass   # any ordinary class will do
...
>>> a = A()
>>> b = weakref.ref(a)
>>> c = weakref.ref(a)
>>> print(b is c is a.__weakref__)
True
As we can see, b is re-used. We can force python to create a new weakref, by e.g. adding a callback parameter:
>>> def callback():
...     pass
...
>>> a = A()
>>> b = weakref.ref(a)
>>> c = weakref.ref(a, callback)
>>> print(b is c is a.__weakref__)
False
Now b is a.__weakref__, and c is the second reference in the chain. The reference chain is not directly accessible from Python code. We see only the head element of the chain (b), but not how the chain continues (b -> c).
So __weakref__ is the head of the internal linked list of all the weak references to the object. I cannot find any piece of official documentation where this role of __weakref__ is concisely explained, so one should probably not rely on this behavior, as it is an implementation detail.
|
On what CPU cores are my Python processes running?
|
The setup
I have written a pretty complex piece of software in Python (on a Windows PC). My software starts basically two Python interpreter shells. The first shell starts up (I suppose) when you double click the main.py file. Within that shell, other threads are started in the following way:
# Start TCP_thread
TCP_thread = threading.Thread(name = 'TCP_loop', target = TCP_loop, args = (TCPsock,))
TCP_thread.start()
# Start UDP_thread
UDP_thread = threading.Thread(name = 'UDP_loop', target = UDP_loop, args = (UDPsock,))
UDP_thread.start()
The Main_thread starts a TCP_thread and a UDP_thread. Although these are separate threads, they all run within one single Python shell.
The Main_thread also starts a subprocess. This is done in the following way:
p = subprocess.Popen(['python', mySubprocessPath], shell=True)
From the Python documentation, I understand that this subprocess is running simultaneously (!) in a separate Python interpreter session/shell. The Main_thread in this subprocess is completely dedicated to my GUI. The GUI starts a TCP_thread for all its communications.
I know that things get a bit complicated. Therefore I have summarized the whole setup in this figure:
I have several questions concerning this setup. I will list them down here:
Question 1 [Solved]
Is it true that a Python interpreter uses only one CPU core at a time to run all the threads? In other words, will the Python interpreter session 1 (from the figure) run all 3 threads (Main_thread, TCP_thread and UDP_thread) on one CPU core?
Answer: yes, this is true. The GIL (Global Interpreter Lock) ensures that all threads run on one CPU core at a time.
Question 2 [Not yet solved]
Do I have a way to track which CPU core it is?
Question 3 [Partly solved]
For this question we forget about threads, but we focus on the subprocess mechanism in Python. Starting a new subprocess implies starting up a new Python interpreter instance. Is this correct?
Answer: Yes this is correct. At first there was some confusion about whether the following code would create a new Python interpreter instance:
p = subprocess.Popen(['python', mySubprocessPath], shell = True)
The issue has been clarified. This code indeed starts a new Python interpreter instance.
Will Python be smart enough to make that separate Python interpreter instance run on a different CPU core? Is there a way to track which one, perhaps with some sporadic print statements as well?
Question 4 [New question]
The community discussion raised a new question. There are apparently two approaches when spawning a new process (within a new Python interpreter instance):
# Approach 1(a)
p = subprocess.Popen(['python', mySubprocessPath], shell = True)
# Approach 1(b) (J.F. Sebastian)
p = subprocess.Popen([sys.executable, mySubprocessPath])
# Approach 2
p = multiprocessing.Process(target=foo, args=(q,))
The second approach has the obvious downside that it targets just a function - whereas I need to open up a new Python script. Anyway, are both approaches similar in what they achieve?
|
Q: Is it true that a Python interpreter uses only one CPU core at a time to run all the threads?
No. The GIL and CPU affinity are unrelated concepts. The GIL can be released during blocking I/O operations and during long CPU-intensive computations inside a C extension anyway.
If a thread is blocked on the GIL, it is probably not running on any CPU core, and therefore it is fair to say that pure Python multithreading code may use only one CPU core at a time on the CPython implementation.
Q: In other words, will the Python interpreter session 1 (from the figure) run all 3 threads (Main_thread, TCP_thread and UDP_thread) on one CPU core?
I don't think CPython manages CPU affinity implicitly. It most likely relies on the OS scheduler to choose where to run a thread. Python threads are implemented on top of real OS threads.
Q: Or is the Python interpreter able to spread them over multiple cores?
To find out the number of usable CPUs:
>>> import os
>>> len(os.sched_getaffinity(0))
16
Again, whether or not threads are scheduled on different CPUs does not depend on the Python interpreter.
Q: Suppose that the answer to Question 1 is 'multiple cores', do I have a way to track on which core each thread is running, perhaps with some sporadic print statements? If the answer to Question 1 is 'only one core', do I have a way to track which one it is?
I imagine, a specific CPU may change from one time-slot to another. You could look at something like /proc/<pid>/task/<tid>/status on old Linux kernels. On my machine, task_cpu can be read from /proc/<pid>/stat or /proc/<pid>/task/<tid>/stat:
>>> open("/proc/{pid}/stat".format(pid=os.getpid()), 'rb').read().split()[-14]
'4'
For a current portable solution, see whether psutil exposes such info.
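For example, a quick sketch with psutil (availability of these calls varies by platform; cpu_num() works on Linux):
import psutil

p = psutil.Process()          # current process
print(p.cpu_num())            # CPU core the process is currently running on
print(p.cpu_affinity())       # cores the process is allowed to run on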
You could restrict the current process to a set of CPUs:
os.sched_setaffinity(0, {0}) # current process on 0-th core
Q: For this question we forget about threads, but we focus on the subprocess mechanism in Python. Starting a new subprocess implies starting up a new Python interpreter session/shell. Is this correct?
Yes. The subprocess module creates new OS processes. If you run the python executable, it starts a new Python interpreter. If you run a bash script, no new Python interpreter is created, i.e., running the bash executable does not start a new Python interpreter/session/etc.
Q: Supposing that it is correct, will Python be smart enough to make that separate interpreter session run on a different CPU core? Is there a way to track this, perhaps with some sporadic print statements as well?
See above (i.e., OS decides where to run your thread and there could be OS API that exposes where the thread is run).
multiprocessing.Process(target=foo, args=(q,)).start()
multiprocessing.Process also creates a new OS process (that runs a new Python interpreter).
In reality, my subprocess is another file. So this example won't work for me.
Python uses modules to organize the code. If your code is in another_file.py then import another_file in your main module and pass another_file.foo to multiprocessing.Process.
Nevertheless, how would you compare it to p = subprocess.Popen(..)? Does it matter if I start the new process (or should I say 'python interpreter instance') with subprocess.Popen(..)versus multiprocessing.Process(..)?
multiprocessing.Process() is likely implemented on top of subprocess.Popen(). multiprocessing provides an API that is similar to the threading API, and it abstracts away the details of communication between Python processes (how Python objects are serialized to be sent between processes).
If there are no CPU-intensive tasks then you could run your GUI and I/O threads in a single process. If you have a series of CPU-intensive tasks then, to utilize multiple CPUs at once, either use multiple threads with C extensions such as lxml, regex, numpy (or your own one created using Cython) that can release the GIL during long computations, or offload them into separate processes (a simple way is to use a process pool such as the one provided by concurrent.futures).
Q: The community discussion raised a new question. There are apparently two approaches when spawning a new process (within a new Python interpreter instance):
# Approach 1(a)
p = subprocess.Popen(['python', mySubprocessPath], shell = True)
# Approach 1(b) (J.F. Sebastian)
p = subprocess.Popen([sys.executable, mySubprocessPath])
# Approach 2
p = multiprocessing.Process(target=foo, args=(q,))
"Approach 1(a)" is wrong on POSIX (though it may work on Windows). For portability, use "Approach 1(b)" unless you know you need cmd.exe (pass a string in this case, to make sure that the correct command-line escaping is used).
The second approach has the obvious downside that it targets just a function - whereas I need to open up a new Python script. Anyway, are both approaches similar in what they achieve?
subprocess creates new processes, any processes - e.g., you could run a bash script. multiprocessing is used to run Python code in another process. It is more flexible to import a Python module and run its function than to run it as a script. See Call python script with input with in a python script using subprocess.
|