instruction | input | output
---|---|---|
pip installation /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
|
I don't know what the deal is, but I'm stuck after following some Stack Overflow solutions that get nowhere. Can you please help me with this?
Monas-MacBook-Pro:CS764 mona$ sudo python get-pip.py
The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
/tmp/tmpbSjX8k/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
Collecting pip
Downloading pip-7.1.0-py2.py3-none-any.whl (1.1MB)
100% |████████████████████████████████| 1.1MB 181kB/s
Installing collected packages: pip
Found existing installation: pip 1.4.1
Uninstalling pip-1.4.1:
Successfully uninstalled pip-1.4.1
Successfully installed pip-7.1.0
Monas-MacBook-Pro:CS764 mona$ pip --version
-bash: /usr/local/bin/pip: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
|
I had used Homebrew to install Python 2.7 on OS X 10.10 and the new install was missing the symlinks. I ran
brew link --overwrite python
as mentioned in How to symlink python in Homebrew? and it solved the problem.
|
Stop Django from creating migrations if the list of choices of a field changes
|
I have a django core app called "foocore".
There are several optional plugin-like apps. For example "superfoo".
In my case every plugin adds a new choice in a model CharField which belongs to "foocore".
Django migrations detect changes when the list of choices changes.
I think this is not necessary. At least one other developer thinks the same:
https://code.djangoproject.com/ticket/22837
class ActivePlugin(models.Model):
plugin_name = models.CharField(max_length=32, choices=get_active_plugins())
The code to get the choices:
class get_active_plugins(object):
    def __iter__(self):
        for item in ....:
            yield item
The core "foocore" gets used in several projects and every installation has a different set of plugins. Django tries to create useless migrations ....
Is there a way to work around this?
|
See this bug report and discussion for more info: https://code.djangoproject.com/ticket/22837
The proposed solution was to use a callable as the argument for choices, but it appears this has only been implemented for form fields, not model fields.
If you really need dynamic choices, then a ForeignKey is the best solution.
An alternative is to enforce the requirement through a custom clean method for the field and/or by creating a custom form. Form fields do support callable choices.
See this answer for more info: http://stackoverflow.com/a/33514551/54017
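For illustration, here is a minimal sketch of that form-level approach (the plugin names and the get_active_plugin_choices helper are hypothetical): the model field drops choices entirely, so the migration noise goes away, and the restriction is enforced in a ModelForm, where callable choices are supported.
from django import forms
from django.db import models

def get_active_plugin_choices():
    # hypothetical helper: build (value, label) pairs from whatever
    # registry the installed plugins populate
    return [(name, name) for name in ('superfoo', 'superbar')]

class ActivePlugin(models.Model):
    # no choices here, so changing the plugin set creates no migrations
    plugin_name = models.CharField(max_length=32)

class ActivePluginForm(forms.ModelForm):
    # form fields accept a callable for choices
    plugin_name = forms.ChoiceField(choices=get_active_plugin_choices)

    class Meta:
        model = ActivePlugin
        fields = ['plugin_name']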
|
How does python "know" what to do with the "in" keyword?
|
I'm a bit bewildered by the "in" keyword in python.
If I take a sample list of tuples:
data = [
(5, 1, 9.8385465),
(10, 1, 8.2087544),
(15, 1, 7.8788187),
(20, 1, 7.5751283)
]
I can do two different "for - in" loops and get different results:
for G,W,V in data:
print G,W,V
This prints each set of values on a line, e.g. 5, 1, 9.8385465
for i in data:
print i
This prints the whole tuple, e.g. (5, 1, 9.8385465)
How does python "know" that by providing one variable I want to assign the tuple to a variable, and that by providing three variables I want to assign each value from the tuple to one of those variables?
|
According to the for compound statement documentation:
Each item in turn is assigned to the target list using the standard
rules for assignments...
Those "standard rules" are in the assignment statement documentation, specifically:
Assignment of an object to a target list is recursively defined as
follows.
If the target list is a single target: The object is assigned to that target.
If the target list is a comma-separated list of targets: The object must be an iterable with the same number of items as there are targets
in the target list, and the items are assigned, from left to right, to
the corresponding targets.
So this different behaviour, depending on whether you assign to a single target or a list of targets, is baked right into Python's fundamentals, and applies wherever assignment is used.
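A minimal illustration of those two rules, using one of the tuples from the question:
item = (5, 1, 9.8385465)

i = item         # single target: the whole tuple is bound to one name
G, W, V = item   # target list: the tuple is unpacked, one item per target

print i          # (5, 1, 9.8385465)
print G, W, V    # 5 1 9.8385465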
|
Python 2.7: round number to nearest integer
|
I've been trying to round long float numbers like:
32.268907563;
32.268907563;
31.2396694215;
33.6206896552;
...
With no success so far. I tried math.ceil(x) and math.floor(x) (although those round up or down, which is not what I'm looking for) and round(x), which didn't work either (still float numbers).
What could I do?
EDIT: CODE:
for i in widthRange:
    for j in heightRange:
        r, g, b = rgb_im.getpixel((i, j))
        h, s, v = colorsys.rgb_to_hsv(r/255.0, g/255.0, b/255.0)
        h = h * 360
        int(round(h))
        print h # Debug
|
int(round(x))
will round it and convert it to an integer.
EDIT:
You are not assigning int(round(h)) to any variable. When you call int(round(h)), it returns the integer but does nothing else; you have to change that line to:
h = int(round(h))
to assign the new value to h.
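For illustration, this is what the difference looks like in a Python 2 interpreter:
>>> h = 32.268907563
>>> round(h)        # round() returns a float in Python 2
32.0
>>> int(round(h))   # wrapping it in int() gives an integer
32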
|
Classifying Python array by nearest "seed" region?
|
I have a raster of ecological habitats which I've converted into a two-dimensional Python numpy array (example_array below). I also have an array containing "seed" regions with unique values (seed_array below) which I'd like to use to classify my habitat regions. I'd like to 'grow' my seed regions 'into' my habitat regions such that habitats are assigned the ID of the nearest seed region, as measured 'through' the habitat regions. For example:
My best approach used the ndimage.distance_transform_edt function to create an array depicting the nearest "seed" region to each cell in the dataset, which was then substituted back into the habitat array. This doesn't work particularly well, however, as the function doesn't measure distances "through" my habitat regions, for example below where the red circle represents an incorrectly classified cell:
Below are sample arrays for my habitat and seed data, and an example of the kind of output I'm looking for. My actual datasets are much larger - over a million habitat/seed regions. Any help would be much appreciated!
import numpy as np
import scipy.ndimage as ndimage
import matplotlib.pyplot as plt
# Sample study area array
example_array = np.array([[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
[1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0],
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
# Plot example array
plt.imshow(example_array, cmap="spectral", interpolation='nearest')
seed_array = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 0, 0, 2, 2, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
# Plot seeds
plt.imshow(seed_array, cmap="spectral", interpolation='nearest')
desired_output = np.array([[0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 4, 4, 4, 0, 0, 0, 3, 3, 3],
[0, 0, 0, 0, 4, 4, 0, 0, 0, 3, 3, 3],
[0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0],
[1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 3, 3],
[1, 1, 0, 1, 0, 0, 0, 0, 2, 2, 3, 3],
[1, 1, 1, 1, 0, 0, 2, 2, 2, 0, 0, 3],
[1, 1, 1, 1, 1, 2, 2, 2, 2, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 2, 2, 2, 0, 0, 0],
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
# Plot desired output
plt.imshow(desired_output, cmap="spectral", interpolation='nearest')
|
You can use watershed segmentation from scikit-image:
Distance transform
from scipy import ndimage as nd
distance = nd.distance_transform_edt(example_array)
Watershed segmentation
from skimage.morphology import watershed, square
result = watershed(-distance, seed_array, mask=example_array, \
connectivity=square(3))
Result
subplot(1,2,1)
imshow(-distance, 'spectral', interpolation='none')
subplot(1,2,2)
imshow(result, 'spectral', interpolation='none')
As another variant, following your initial approach, you can use watershed to find the connected neighbours of the nearest seeds, as you mentioned in the question:
Calculate distance to the seeds:
distance = nd.distance_transform_edt(seed_array == 0)
Calculate watershed in the distance space:
result = watershed(distance, seed_array, mask=example_array, \
connectivity=square(3))
Plot result:
figure(figsize=(9,3))
subplot(1,3,1)
imshow(distance, 'jet', interpolation='none')
subplot(1,3,2)
imshow(np.ma.masked_where(example_array==0, distance), 'jet', interpolation='none')
subplot(1,3,3)
imshow(result, 'spectral', interpolation='none')
Further discussion: The watershed method tries to grow regions from seeded peaks by flowing through the image gradient. As your image is binary, the regions will expand equally in all directions from the seeded points, and thus give you the point in between two regions. For more info about watershed refer to Wikipedia.
In the first example, the distance transform is calculated in the original image, and thus the regions expand equally from seeds until they reach the splitting point in the middle.
In the second example, the distance transform is calculated from all the pixels to any of the seeded points, and watershed is then applied in that space. Watershed will basically assign each pixel to its nearest seed, but it adds a connectivity constraint.
NOTE the sign difference in the distance maps in both plotting and watershed.
NOTE In the distance maps (left image in both plots), blue means close while red means far.
|
What's the best way to share Jupyter notebooks with non-programmers?
|
I am trying to wrap my head around what I can/cannot do with Jupyter.
I have a Jupyter server running on our internal server, accessible via VPN and password protected.
I am the only one actually creating notebooks but I would like to make some notebooks visible to other team members in a read-only way. Ideally I could just share a URL with them that they would bookmark for when they want to see the notebook with refreshed data.
I saw export options but cannot find any mention of "publishing" or "making public" local live notebooks. Is this impossible? Is it maybe just a wrong way to think about how Jupyter should be used? Are there best practices around this?
|
The "best" way to share a Jupyter notebook is to simply to place it on GitHub (and view it directly) or some other public link and use the Jupyter Notebook Viewer. When privacy is more of an issue then there are alternatives but it's certainly more complex, there's no built in way to do this in Jupyter alone but a couple of options are:
Host your own nbviewer
GitHub and the Jupyter Notebook Viewer both use the same tool to render .ipynb files into static HTML; this tool is nbviewer.
The installation instructions are more complex than I'm willing to go into here but if your company/team has a shared server that doesn't require password access then you could host the nbviewer on that server and direct it to load from your credentialed server. This will probably require some more advanced configuration than you're going to find in the docs.
Set up a deployment script
If you don't necessarily need live updating HTML then you could set up a script on your credentialed server that will simply use Jupyter's built in export options to create the static HTML files and then send those to a more publicly accessible server.
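As a rough sketch of such a script (the notebook and output file names are placeholders), the export step can be done with nbconvert's HTMLExporter and the resulting file copied to wherever your public server serves static pages:
import io
from nbconvert import HTMLExporter

exporter = HTMLExporter()
body, resources = exporter.from_filename('analysis.ipynb')  # placeholder notebook name

with io.open('analysis.html', 'w', encoding='utf-8') as f:
    f.write(body)
# then copy analysis.html to the publicly accessible server (rsync, scp, etc.)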
Good luck!
|
Pandas DataFrame: How to natively get minimum across range of rows and columns
|
I have a Pandas DataFrame that looks similar to this but with 10,000 rows and 500 columns.
For each row, I would like to find the minimum value between 3 days ago at 15:00 and today at 13:30.
Is there some native numpy way to do this quickly?
My goal is to be able to get the minimum value for each row by saying something like "what is the minimum value from 3 days ago at 15:00 to 0 days ago (aka today) at 13:30?"
For this particular example the answers for the last two rows would be:
2011-01-09 2481.22
2011-01-10 2481.22
My current way is this:
1. Get the earliest row (only the values after the start time)
2. Get the middle rows
3. Get the last row (only the values before the end time)
4. Concat (1), (2), and (3)
5. Get the minimum of (4)
But this takes a very long time on a large DataFrame
The following code will generate a similar DF:
import numpy
import pandas
import datetime
numpy.random.seed(0)
random_numbers = (numpy.random.rand(10, 8)*100 + 2000)
columns = [datetime.time(13,0) , datetime.time(13,30), datetime.time(14,0), datetime.time(14,30) , datetime.time(15,0), datetime.time(15,30) ,datetime.time(16,0), datetime.time(16,30)]
index = pandas.date_range('2011/1/1', '2011/1/10')
df = pandas.DataFrame(data = random_numbers, columns=columns, index = index).astype(int)
print df
Here is the json version of the dataframe:
'{"13:00:00":{"1293840000000":2085,"1293926400000":2062,"1294012800000":2035,"1294099200000":2086,"1294185600000":2006,"1294272000000":2097,"1294358400000":2078,"1294444800000":2055,"1294531200000":2023,"1294617600000":2024},"13:30:00":{"1293840000000":2045,"1293926400000":2039,"1294012800000":2035,"1294099200000":2045,"1294185600000":2025,"1294272000000":2099,"1294358400000":2028,"1294444800000":2028,"1294531200000":2034,"1294617600000":2010},"14:00:00":{"1293840000000":2095,"1293926400000":2006,"1294012800000":2001,"1294099200000":2032,"1294185600000":2022,"1294272000000":2040,"1294358400000":2024,"1294444800000":2070,"1294531200000":2081,"1294617600000":2095},"14:30:00":{"1293840000000":2057,"1293926400000":2042,"1294012800000":2018,"1294099200000":2023,"1294185600000":2025,"1294272000000":2016,"1294358400000":2066,"1294444800000":2041,"1294531200000":2098,"1294617600000":2023},"15:00:00":{"1293840000000":2082,"1293926400000":2025,"1294012800000":2040,"1294099200000":2061,"1294185600000":2013,"1294272000000":2063,"1294358400000":2024,"1294444800000":2036,"1294531200000":2096,"1294617600000":2068},"15:30:00":{"1293840000000":2090,"1293926400000":2084,"1294012800000":2092,"1294099200000":2003,"1294185600000":2001,"1294272000000":2049,"1294358400000":2066,"1294444800000":2082,"1294531200000":2090,"1294617600000":2005},"16:00:00":{"1293840000000":2081,"1293926400000":2003,"1294012800000":2009,"1294099200000":2001,"1294185600000":2011,"1294272000000":2098,"1294358400000":2051,"1294444800000":2092,"1294531200000":2029,"1294617600000":2073},"16:30:00":{"1293840000000":2015,"1293926400000":2095,"1294012800000":2094,"1294099200000":2042,"1294185600000":2061,"1294272000000":2006,"1294358400000":2042,"1294444800000":2004,"1294531200000":2099,"1294617600000":2088}}'
|
You can first stack the DataFrame to create a series and then index slice it as required and take the min. For example:
first, last = ('2011-01-07', datetime.time(15)), ('2011-01-10', datetime.time(13, 30))
df.stack().loc[first: last].min()
The result of df.stack is a Series with a MultiIndex where the inner level is composed of the original columns. We then slice using tuple pairs with the start and end date and times.
If you're going to be doing lots of such operations then you should consider assigning df.stack() to some variable. You might then consider changing the index to a proper DatetimeIndex. Then you can work with both the time series and the grid format as required.
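For example, here is a sketch of that conversion using the question's imports; after it, a plain label slice answers the original question directly:
stacked = df.stack()
# combine the date level and the time level of the MultiIndex into full timestamps
stacked.index = pandas.to_datetime(
    [datetime.datetime.combine(d, t) for d, t in stacked.index])
# an ordinary label slice now gives the minimum over the window
result = stacked['2011-01-07 15:00':'2011-01-10 13:30'].min()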
Here's another method which avoids stacking and is a lot faster on DataFrames of the size you're actually working with (as a one-off; slicing the stacked DataFrame is a lot faster once it's stacked so if you're doing many of these operations you should stack and convert the index).
It's less general as it works with min and max but not with, say, mean. It gets the min of the subset of the first and last rows and the min of the rows in between (if any) and takes the min of these three candidates.
first_row = df.index.get_loc(first[0])
last_row = df.index.get_loc(last[0])
if first_row == last_row:
    result = df.loc[first[0], first[1]: last[1]].min()
elif first_row < last_row:
    first_row_min = df.loc[first[0], first[1]:].min()
    last_row_min = df.loc[last[0], :last[1]].min()
    middle_min = df.iloc[first_row + 1:last_row].min().min()
    result = min(first_row_min, last_row_min, middle_min)
else:
    raise ValueError('first row must be <= last row')
Note that if first_row + 1 == last_row then middle_min is nan but the result is still correct as long as middle_min doesn't come first in the call to min.
|
Why are .pyc files created on import?
|
I've seen several resources describing what .pyc files are and when they're created. But now I'm wondering why they're created when .py files are imported?
Also, why not create a .pyc file for the main Python file doing the importing?
I'm guessing it has to do with performance optimization, and learning this has encouraged me to break my code out into more files, since the built-in compilation seems worth taking advantage of. But I'm not sure if this is the case, and I'm also curious whether anyone has stats on the difference between running programs with and without the .pyc files, if it is indeed for speed.
I'd run them myself but I don't have a good, large Python codebase to test it on. :(
|
Python source code is compiled to bytecode, and it is the bytecode that is run. A .pyc file contains a copy of that bytecode, and by caching it Python doesn't have to re-compile the source each time it needs to load the module.
You can get an idea of how much time is saved by timing the compile() function:
>>> import urllib2
>>> import timeit
>>> urllib2_source = open(urllib2.__file__.rstrip('c')).read()
>>> timeit.timeit("compile(source, '', 'exec')", 'from __main__ import urllib2_source as source', number=1000)
6.977046966552734
>>> _ / 1000.0
0.006977046966552734
So it takes 7 milliseconds to compile the urllib2.py source code. That doesn't sound like much, but this adds up quickly as Python loads a lot of modules in its lifetime. Just run an average script with the -v command-line switch; here I run the help output for the pydoc tool:
$ bin/python -v -m pydoc -h
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/site.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/site.py
import site # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/site.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/os.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/os.py
import os # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/os.pyc
import errno # builtin
import posix # builtin
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/posixpath.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/posixpath.py
import posixpath # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/posixpath.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/stat.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/stat.py
import stat # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/stat.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/genericpath.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/genericpath.py
import genericpath # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/genericpath.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/warnings.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/warnings.py
import warnings # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/warnings.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/linecache.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/linecache.py
import linecache # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/linecache.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/types.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/types.py
import types # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/types.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/UserDict.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/UserDict.py
import UserDict # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/UserDict.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_abcoll.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_abcoll.py
import _abcoll # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_abcoll.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/abc.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/abc.py
import abc # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/abc.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_weakrefset.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_weakrefset.py
import _weakrefset # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_weakrefset.pyc
import _weakref # builtin
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/copy_reg.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/copy_reg.py
import copy_reg # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/copy_reg.pyc
import encodings # directory /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/__init__.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/__init__.py
import encodings # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/__init__.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/codecs.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/codecs.py
import codecs # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/codecs.pyc
import _codecs # builtin
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/aliases.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/aliases.py
import encodings.aliases # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/aliases.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/utf_8.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/utf_8.py
import encodings.utf_8 # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/utf_8.pyc
Python 2.7.8 (default, Sep 9 2014, 11:33:29)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/runpy.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/runpy.py
import runpy # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/runpy.pyc
import imp # builtin
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/pkgutil.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/pkgutil.py
import pkgutil # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/pkgutil.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.py
import re # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_compile.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_compile.py
import sre_compile # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_compile.pyc
import _sre # builtin
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_parse.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_parse.py
import sre_parse # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_parse.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_constants.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_constants.py
import sre_constants # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_constants.pyc
dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_locale.so", 2);
import _locale # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_locale.so
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/inspect.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/inspect.py
import inspect # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/inspect.pyc
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/string.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/string.py
import string # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/string.pyc
dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/strop.so", 2);
import strop # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/strop.so
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/dis.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/dis.py
import dis # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/dis.pyc
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/opcode.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/opcode.py
import opcode # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/opcode.pyc
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/tokenize.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/tokenize.py
import tokenize # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/tokenize.pyc
dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/itertools.so", 2);
import itertools # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/itertools.so
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/token.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/token.py
import token # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/token.pyc
dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/operator.so", 2);
import operator # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/operator.so
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/collections.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/collections.py
import collections # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/collections.pyc
dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_collections.so", 2);
import _collections # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_collections.so
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/keyword.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/keyword.py
import keyword # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/keyword.pyc
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/heapq.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/heapq.py
import heapq # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/heapq.pyc
dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_heapq.so", 2);
import _heapq # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_heapq.so
import thread # builtin
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/repr.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/repr.py
import repr # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/repr.pyc
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/traceback.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/traceback.py
import traceback # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/traceback.pyc
# /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/locale.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/locale.py
import locale # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/locale.pyc
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/functools.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/functools.py
import functools # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/functools.pyc
dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_functools.so", 2);
import _functools # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_functools.so
# /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/getopt.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/getopt.py
import getopt # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/getopt.pyc
pydoc - the Python documentation tool
pydoc.py <name> ...
Show text documentation on something. <name> may be the name of a
Python keyword, topic, function, module, or package, or a dotted
reference to a class or function within a module or module in a
package. If <name> contains a '/', it is used as the path to a
Python source file to document. If name is 'keywords', 'topics',
or 'modules', a listing of these things is displayed.
pydoc.py -k <keyword>
Search for a keyword in the synopsis lines of all available modules.
pydoc.py -p <port>
Start an HTTP server on the given port on the local machine.
pydoc.py -g
Pop up a graphical interface for finding and serving documentation.
pydoc.py -w <name> ...
Write out the HTML documentation for a module to a file in the current
directory. If <name> contains a '/', it is treated as a filename; if
it names a directory, documentation is written for all the contents.
# clear __builtin__._
# clear sys.path
# clear sys.argv
# clear sys.ps1
# clear sys.ps2
# clear sys.exitfunc
# clear sys.exc_type
# clear sys.exc_value
# clear sys.exc_traceback
# clear sys.last_type
# clear sys.last_value
# clear sys.last_traceback
# clear sys.path_hooks
# clear sys.path_importer_cache
# clear sys.meta_path
# clear sys.flags
# clear sys.float_info
# restore sys.stdin
# restore sys.stdout
# restore sys.stderr
# cleanup __main__
# cleanup[1] _collections
# cleanup[1] locale
# cleanup[1] functools
# cleanup[1] encodings
# cleanup[1] site
# cleanup[1] runpy
# cleanup[1] operator
# cleanup[1] supervisor
# cleanup[1] _heapq
# cleanup[1] abc
# cleanup[1] _weakrefset
# cleanup[1] sre_constants
# cleanup[1] collections
# cleanup[1] _codecs
# cleanup[1] opcode
# cleanup[1] _warnings
# cleanup[1] mpl_toolkits
# cleanup[1] inspect
# cleanup[1] encodings.utf_8
# cleanup[1] repr
# cleanup[1] codecs
# cleanup[1] getopt
# cleanup[1] pkgutil
# cleanup[1] _functools
# cleanup[1] thread
# cleanup[1] keyword
# cleanup[1] strop
# cleanup[1] signal
# cleanup[1] traceback
# cleanup[1] itertools
# cleanup[1] posix
# cleanup[1] encodings.aliases
# cleanup[1] exceptions
# cleanup[1] _weakref
# cleanup[1] token
# cleanup[1] dis
# cleanup[1] tokenize
# cleanup[1] heapq
# cleanup[1] string
# cleanup[1] imp
# cleanup[1] zipimport
# cleanup[1] re
# cleanup[1] _locale
# cleanup[1] sre_compile
# cleanup[1] _sre
# cleanup[1] sre_parse
# cleanup[2] copy_reg
# cleanup[2] posixpath
# cleanup[2] errno
# cleanup[2] _abcoll
# cleanup[2] types
# cleanup[2] genericpath
# cleanup[2] stat
# cleanup[2] warnings
# cleanup[2] UserDict
# cleanup[2] os.path
# cleanup[2] linecache
# cleanup[2] os
# cleanup sys
# cleanup __builtin__
# cleanup ints: 21 unfreed ints
# cleanup floats
That's 53 imports:
$ bin/python -v -m pydoc -h 2>&1 | egrep ^import | wc -l
53
Rather than loading the (larger) source file and compiling it each time, a smaller bytecode file can be read and used immediately. That easily adds up to a third or half a second just to print some help information for a command-line tool.
Python does not create a cache file for the main script; that's because that would clutter up your scripts directory with files that are not going to be loaded nearly as often as modules are loaded.
If you run a script so often that the compile time for that one file affects you, you can always either move the majority of the code to a module (and avoid having to compile a large script) or use the compileall tool to create a .pyc cache file for the script, then run that .pyc file directly. Note that Python will then not recompile that file if you change the script!
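For example, a minimal sketch of that last option ('myscript.py' is a placeholder name):
import compileall

# writes myscript.pyc next to the source (Python 2); run it with: python myscript.pyc
# equivalent to: python -m compileall myscript.py
compileall.compile_file('myscript.py')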
|
"OSError: [Errno 1] Operation not permitted" when installing Scrapy in OSX 10.11 (El Capitan) (System Integrity Protection)
|
I'm trying to install Scrapy Python framework in OSX 10.11 (El Capitan) via pip. The installation script downloads the required modules and at some point returns the following error:
OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
I've tried to deactivate the rootless feature in OSX 10.11 with the command:
sudo nvram boot-args="rootless=0";sudo reboot
but I still get the same error when the machine reboots.
Any clue or idea from my fellow StackExchangers?
If it helps, the full script output is the following:
sudo -s pip install scrapy
Collecting scrapy
Downloading Scrapy-1.0.2-py2-none-any.whl (290kB)
100% |████████████████████████████████| 290kB 345kB/s
Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /Library/Python/2.7/site-packages (from scrapy)
Requirement already satisfied (use --upgrade to upgrade): queuelib in /Library/Python/2.7/site-packages (from scrapy)
Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from scrapy)
Collecting w3lib>=1.8.0 (from scrapy)
Downloading w3lib-1.12.0-py2.py3-none-any.whl
Collecting lxml (from scrapy)
Downloading lxml-3.4.4.tar.gz (3.5MB)
100% |████████████████████████████████| 3.5MB 112kB/s
Collecting Twisted>=10.0.0 (from scrapy)
Downloading Twisted-15.3.0.tar.bz2 (4.4MB)
100% |████████████████████████████████| 4.4MB 94kB/s
Collecting six>=1.5.2 (from scrapy)
Downloading six-1.9.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): zope.interface>=3.6.0 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from Twisted>=10.0.0->scrapy)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from zope.interface>=3.6.0->Twisted>=10.0.0->scrapy)
Installing collected packages: six, w3lib, lxml, Twisted, scrapy
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/basecommand.py", line 223, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/commands/install.py", line 299, in run
root=options.root_path,
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_set.py", line 640, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_install.py", line 726, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/utils/__init__.py", line 314, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
Thank you!
|
pip install --ignore-installed six
Would do the trick.
Source: github.com/pypa/pip/issues/3165
|
Why is 'a' in ('abc') True while 'a' in ['abc'] is False?
|
When using the interpreter, the expression 'a' in ('abc') returns True, while 'a' in ['abc'] returns False. Can somebody explain this behaviour?
|
('abc') is the same as 'abc'. 'abc' contains the substring 'a', hence 'a' in 'abc' == True.
If you want the tuple instead, you need to write ('abc', ).
['abc'] is a list (containing a single element, the string 'abc'). 'a' is not a member of this list, so 'a' in ['abc'] == False
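A quick interpreter session makes the difference visible:
>>> type(('abc'))       # the parentheses do nothing here
<type 'str'>
>>> type(('abc',))      # the trailing comma makes the tuple
<type 'tuple'>
>>> 'a' in 'abc'        # substring test
True
>>> 'a' in ['abc']      # list membership tests whole elements
False
>>> 'abc' in ['abc']
True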
|
Interactive plots placement in ipython notebook widget
|
I've got two plots which I'd like to make interactive with ipython notebook widgets. The code below is a simplified sample of what I'm trying to do.
import matplotlib.pyplot as plt
import IPython.html.widgets as wdg
def displayPlot1(rngMax = 10):
    plt.figure(0)
    plt.plot([x for x in range(0, rngMax)])

wdg1 = wdg.interactive(displayPlot1, rngMax = wdg.IntSlider(20))

def displayPlot2(rngMax = 10):
    plt.figure(1)
    plt.plot([x**2 for x in range(0, rngMax)])

wdg2 = wdg.interactive(displayPlot2, rngMax = wdg.IntSlider(10))

wdg.ContainerWidget([wdg.HTML("""<h1>First Plot</h1>"""),
                     wdg1,
                     wdg.HTML("""<h1>Second Plot</h1>"""),
                     wdg2])
The first problem is that it displays all the widgets first, and two plots one after another at the end:
title1
widget1
title2
widget2
plot1
plot2
I'd like to have:
title1
widget1
plot1
title2
widget2
plot2
Also it seems the whole output gets overwritten as soon as I touch any of the sliders, and displays one plot only (the one I'm changing).
How do I fix this problem? (I potentially can do it if I separate them into two different cells, however I'm planning to do something more complex and it needs to be in one cell eventually)
|
IPython Notebook displays widgets before any output. One thing you can do is to place your plots inside an HTML widget. This can be placed in any position relative to other widgets.
If you do this, however, you explicitly need to place your plot within the HTML widget. This can be a bit tricky, but a quick solution is to save the byte string of the plot to a buffer and then put the byte string in an image tag.
Here's an example (Gist here):
import base64
import io
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display
def handle_input1(slope):
    plt.figure()
    plt.plot([x * slope for x in range(0, 11)])
    plt.ylim((0,10))
    output1HTML.value = plot_to_html()
    plt.close()

def handle_input2(curvature):
    plt.figure()
    plt.plot([x * x * curvature for x in range(0, 11)])
    plt.ylim((0,100))
    output2HTML.value = plot_to_html()
    plt.close()

def plot_to_html():
    # write image data to a string buffer and get the PNG image bytes
    buf = io.BytesIO()
    plt.savefig(buf, format='png')
    buf.seek(0)
    return """<img src='data:image/png;base64,{}'/>""".format(base64.b64encode(buf.getvalue()).decode('ascii'))
plt.ioff()
heading1HTML = widgets.HTML("""<h1>Slope</h1>""")
input1Float = widgets.FloatSlider(value=0.5, min=0.0, max=1.0, step=0.01, description="slope: ")
widgets.interactive(handle_input1, slope=input1Float)
output1HTML = widgets.HTML()
heading2HTML = widgets.HTML("""<h1>Curvature</h1>""")
input2Float = widgets.FloatSlider(value=0.5, min=0.0, max=1.0, step=0.01, description="curvature: ")
widgets.interactive(handle_input2, curvature=input2Float)
output2HTML = widgets.HTML()
display(widgets.Box([heading1HTML, input1Float, output1HTML, heading2HTML, input2Float, output2HTML]))
handle_input1(input1Float.value)
handle_input2(input2Float.value)
EDIT 1: IPython widgets have been moved to IPython.html; updated the code accordingly.
EDIT 2: Using widgets.interactive as per the latest IPython widgets documentation; updating IPython widget location once more; adding ASCII encoding
|
Why is string's startswith slower than in?
|
Surprisingly, I find startswith is slower than in:
In [10]: s="ABCD"*10
In [11]: %timeit s.startswith("XYZ")
1000000 loops, best of 3: 307 ns per loop
In [12]: %timeit "XYZ" in s
10000000 loops, best of 3: 81.7 ns per loop
As we all know, the in operation needs to search the whole string and startswith just needs to check the first few characters, so startswith should be more efficient.
When s is big enough, startswith is faster:
In [13]: s="ABCD"*200
In [14]: %timeit s.startswith("XYZ")
1000000 loops, best of 3: 306 ns per loop
In [15]: %timeit "XYZ" in s
1000000 loops, best of 3: 666 ns per loop
So it seems that calling startswith has some overhead which makes it slower when the string is small.
And then I tried to figure out the overhead of the startswith call.
First, I used an f variable to reduce the cost of the dot operation - as mentioned in this answer - here we can see startswith is still slower:
In [16]: f=s.startswith
In [17]: %timeit f("XYZ")
1000000 loops, best of 3: 270 ns per loop
Further, I tested the cost of an empty function call:
In [18]: def func(a): pass
In [19]: %timeit func("XYZ")
10000000 loops, best of 3: 106 ns per loop
Regardless of the cost of the dot operation and the function call, the time of startswith is about (270-106) = 164 ns, but the in operation takes only 81.7 ns. It seems there is still some overhead for startswith; what is it?
Add the test result between startswith and __contains__ as suggested by poke and lvc:
In [28]: %timeit s.startswith("XYZ")
1000000 loops, best of 3: 314 ns per loop
In [29]: %timeit s.__contains__("XYZ")
1000000 loops, best of 3: 192 ns per loop
|
As already mentioned in the comments, if you use s.__contains__("XYZ") you get a result that is more similar to s.startswith("XYZ") because it needs to take the same route: Member lookup on the string object, followed by a function call. This is usually somewhat expensive (not enough that you should worry about of course). On the other hand, when you do "XYZ" in s, the parser interprets the operator and can short-cut the member access to the __contains__ (or rather the implementation behind it, because __contains__ itself is just one way to access the implementation).
You can get an idea about this by looking at the bytecode:
>>> dis.dis('"XYZ" in s')
1 0 LOAD_CONST 0 ('XYZ')
3 LOAD_NAME 0 (s)
6 COMPARE_OP 6 (in)
9 RETURN_VALUE
>>> dis.dis('s.__contains__("XYZ")')
1 0 LOAD_NAME 0 (s)
3 LOAD_ATTR 1 (__contains__)
6 LOAD_CONST 0 ('XYZ')
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 RETURN_VALUE
So comparing s.__contains__("XYZ") with s.startswith("XYZ") will produce a more similar result; however, for your example string s, startswith will still be slower.
To get to that, you could check the implementation of both. What is interesting to see in the contains implementation is that it is statically typed and just assumes that the argument is a unicode object itself. So this is quite efficient.
The startswith implementation however is a "dynamic" Python method which requires the implementation to actually parse the arguments. startswith also supports a tuple as an argument, which makes the whole start-up of the method a bit slower (shortened by me, with my comments):
static PyObject * unicode_startswith(PyObject *self, PyObject *args)
{
    // argument parsing
    PyObject *subobj;
    PyObject *substring;
    Py_ssize_t start = 0;
    Py_ssize_t end = PY_SSIZE_T_MAX;
    int result;
    if (!stringlib_parse_args_finds("startswith", args, &subobj, &start, &end))
        return NULL;
    // tuple handling
    if (PyTuple_Check(subobj)) {}
    // unicode conversion
    substring = PyUnicode_FromObject(subobj);
    if (substring == NULL) {}
    // actual implementation
    result = tailmatch(self, substring, start, end, -1);
    Py_DECREF(substring);
    if (result == -1)
        return NULL;
    return PyBool_FromLong(result);
}
This is likely a big reason why startswith is slower for short strings, for which the in check is fast because of its simplicity.
|
Boto3 to download all files from a S3 Bucket
|
I'm using boto3 to get files from an S3 bucket. I need functionality similar to aws s3 sync.
My current code is
#!/usr/bin/python
import boto3
s3=boto3.client('s3')
list=s3.list_objects(Bucket='my_bucket_name')['Contents']
for key in list:
    s3.download_file('my_bucket_name', key['Key'], key['Key'])
This is working fine, as long as the bucket has only files.
If a folder is present inside the bucket, it throws an error:
Traceback (most recent call last):
File "./test", line 6, in <module>
s3.download_file('my_bucket_name', key['Key'], key['Key'])
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/inject.py", line 58, in download_file
extra_args=ExtraArgs, callback=Callback)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 651, in download_file
extra_args, callback)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 666, in _download_file
self._get_object(bucket, key, filename, extra_args, callback)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 690, in _get_object
extra_args, callback)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 707, in _do_get_object
with self._osutil.open(filename, 'wb') as f:
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 323, in open
return open(filename, mode)
IOError: [Errno 2] No such file or directory: 'my_folder/.8Df54234'
Is this a proper way to download a complete S3 bucket using boto3? How do I download folders?
|
I had the same needs and created the following function that downloads the files recursively.
The directories are created locally only if they contain files.
import boto3
import os
def download_dir(client, resource, dist, local='/tmp', bucket='your_bucket'):
    paginator = client.get_paginator('list_objects')
    for result in paginator.paginate(Bucket=bucket, Delimiter='/', Prefix=dist):
        if result.get('CommonPrefixes') is not None:
            for subdir in result.get('CommonPrefixes'):
                download_dir(client, resource, subdir.get('Prefix'), local)
        if result.get('Contents') is not None:
            for file in result.get('Contents'):
                if not os.path.exists(os.path.dirname(local + os.sep + file.get('Key'))):
                    os.makedirs(os.path.dirname(local + os.sep + file.get('Key')))
                resource.meta.client.download_file(bucket, file.get('Key'), local + os.sep + file.get('Key'))
The function is called that way:
def _start():
    client = boto3.client('s3')
    resource = boto3.resource('s3')
    download_dir(client, resource, 'clientconf/', '/tmp')
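If you want to mirror the whole bucket rather than a single prefix, the same function can be called with an empty prefix (the bucket name below is a placeholder):
def _start_all():
    client = boto3.client('s3')
    resource = boto3.resource('s3')
    download_dir(client, resource, '', '/tmp', bucket='my_bucket_name')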
|
How to convert dictionary values to int in Python?
|
I have a program that returns a set of domains with ranks like so:
ranks = [
{'url': 'example.com', 'rank': '11,279'},
{'url': 'facebook.com', 'rank': '2'},
{'url': 'google.com', 'rank': '1'}
]
I'm trying to sort them by ascending rank with sorted:
results = sorted(ranks,key=itemgetter("rank"))
However, since the values of "rank" are strings, it sorts them alphanumerically instead of by ascending value:
1. google.com: 1
2. example.com: 11,279
3. facebook.com: 2
I need to convert the values of only the "rank" key to integers so that they'll sort correctly. Any ideas?
|
You are almost there. You need to convert the picked values to integers after removing the , separators, like this:
results = sorted(ranks, key=lambda x: int(x["rank"].replace(",", "")))
For example,
>>> ranks = [
... {'url': 'example.com', 'rank': '11,279'},
... {'url': 'facebook.com', 'rank': '2'},
... {'url': 'google.com', 'rank': '1'}
... ]
>>> from pprint import pprint
>>> pprint(sorted(ranks, key=lambda x: int(x["rank"].replace(",", ""))))
[{'rank': '1', 'url': 'google.com'},
{'rank': '2', 'url': 'facebook.com'},
{'rank': '11,279', 'url': 'example.com'}]
Note: I just used pprint function to pretty print the result.
Here, x will be the current object for which the key value is being determined. We get the value of the rank attribute from it, replace , with an empty string, and then convert that to a number with int.
If you don't want to replace , and would rather handle it properly, then you can use the locale module's atoi function, like this:
>>> import locale
>>> pprint(sorted(ranks, key=lambda x: int(locale.atoi(x["rank"]))))
[{'rank': '1', 'url': 'google.com'},
{'rank': '2', 'url': 'facebook.com'},
{'rank': '11,279', 'url': 'example.com'}]
|
Plot a (polar) color wheel based on a colormap using Python/Matplotlib
|
I am trying to create a color wheel in Python, preferably using Matplotlib. The following works OK:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
xval = np.arange(0, 2*np.pi, 0.01)
yval = np.ones_like(xval)
colormap = plt.get_cmap('hsv')
norm = mpl.colors.Normalize(0.0, 2*np.pi)
ax = plt.subplot(1, 1, 1, polar=True)
ax.scatter(xval, yval, c=xval, s=300, cmap=colormap, norm=norm, linewidths=0)
ax.set_yticks([])
However, this attempt has two serious drawbacks.
First, when saving the resulting figure as a vector (figure_1.svg), the color wheel consists (as expected) of 621 different shapes, corresponding to the different (x,y) values being plotted. Although the result looks like a circle, it isn't really. I would greatly prefer to use an actual circle, defined by a few path points and Bezier curves between them, as in e.g. matplotlib.patches.Circle. This seems to me the 'proper' way of doing it, and the result would look nicer (no banding, better gradient, better anti-aliasing).
Second (relatedly), the final plotted markers (the last few before 2*pi) overlap the first few. It's very hard to see in the pixel rendering, but if you zoom in on the vector-based rendering you can clearly see the last disc overlap the first few.
I tried using different markers (. or |), but none of them get around the second issue.
Bottom line: can I draw a circle in Python/Matplotlib which is defined in the proper vector/Bezier curve way, and which has an edge color defined according to a colormap (or, failing that, an arbitrary color gradient)?
|
One way I have found is to produce a colormap and then project it onto a polar axis. Here is a working example - it includes a nasty hack, though (clearly commented). I'm sure there's a way to either adjust limits or (harder) write your own Transform to get around it, but I haven't quite managed that yet. I thought the bounds on the call to Normalize would do that, but apparently not.
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import cm
import matplotlib as mpl
fig = plt.figure()
display_axes = fig.add_axes([0.1,0.1,0.8,0.8], projection='polar')
display_axes._direction = 2*np.pi ## This is a nasty hack - using the hidden field to
## multiply the values such that 1 become 2*pi
## this field is supposed to take values 1 or -1 only!!
norm = mpl.colors.Normalize(0.0, 2*np.pi)
# Plot the colorbar onto the polar axis
# note - use orientation horizontal so that the gradient goes around
# the wheel rather than centre out
quant_steps = 2056
cb = mpl.colorbar.ColorbarBase(display_axes, cmap=cm.get_cmap('hsv',quant_steps),
norm=norm,
orientation='horizontal')
# aesthetics - get rid of border and axis labels
cb.outline.set_visible(False)
display_axes.set_axis_off()
plt.show() # Replace with plt.savefig if you want to save a file
This produces
If you want a ring rather than a wheel, use this before plt.show() or plt.savefig
display_axes.set_rlim([-1,1])
This gives
As per @EelkeSpaak in comments - if you save the graphic as an SVG as per the OP, here is a tip for working with the resulting graphic: The little elements of the resulting SVG image are touching and non-overlapping. This leads to faint grey lines in some renderers (Inkscape, Adobe Reader, probably not in print). A simple solution to this is to apply a small (e.g. 120%) scaling to each of the individual gradient elements, using e.g. Inkscape or Illustrator. Note you'll have to apply the transform to each element separately (the mentioned software provides functionality to do this automatically), rather than to the whole drawing, otherwise it has no effect.
|
Localhost Endpoint to DynamoDB Local with Boto3
|
Although Amazon provides documentation regarding how to connect to dynamoDB local with Java, PHP and .Net, there is no description of how to connect to localhost:8000 using Python. Existing documentation on the web points to the use of the DynamoDBConnection method inside boto.dynamodb2.layer1, but this creates an incompatibility between live and test environments that use the boto3 protocol to manage dynamoDB.
In boto3, you can make a request to DynamoDB using the following constructor, with configuration taken from variables set in the environment:
client = boto3.client('dynamodb')
table = client.list_tables()
Whereas the boto.dynamodb2.layer1 package requires you to construct the following:
client = DynamoDBConnection(
    host='localhost',
    port=8000,
    aws_access_key_id='anything',
    aws_secret_access_key='anything',
    is_secure=False)
table = client.list_tables()
Although it is possible to create logic which determines the proper constructor based upon the local environment, I am wary of building a set of methods which treat each constructor as the same. Instead, I would prefer to use boto3 for everything and to be able to set the endpoint for dynamoDB in the environmental variables. Unfortunately, that option does not appear to be currently be available.
Is there any way to use boto3 to define a dynamoDB local endpoint (like the other languages)? Or any chance that Amazon will plan to support this feature?
|
It does support DynamoDB Local. You just need to set the appropriate endpoint, just as you can with the other language SDKs.
Here is a code snippet of how you can use boto3's client and resource interface via DynamoDB Local:
import boto3
# For a Boto3 client.
ddb = boto3.client('dynamodb', endpoint_url='http://localhost:8000')
response = ddb.list_tables()
print(response)
# For a Boto3 service resource
ddb = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
print(list(ddb.tables.all()))
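If you also want to prevent Boto3 from picking up real credentials from your environment, you can pass dummy ones explicitly; DynamoDB Local does not validate them. A small sketch:
import boto3

# Dummy region and credentials: DynamoDB Local accepts anything here.
ddb = boto3.client('dynamodb',
                   endpoint_url='http://localhost:8000',
                   region_name='us-west-2',
                   aws_access_key_id='anything',
                   aws_secret_access_key='anything')
print(ddb.list_tables())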
|
How to limit python traceback to specific files
|
I write a lot of Python code that uses external libraries. Frequently I will write a bug, and when I run the code I get a big long traceback in the Python console. 99.999999% of the time it's due to a coding error in my code, not because of a bug in the package. But the traceback goes all the way to the line of error in the package code, and either it takes a lot of scrolling through the traceback to find the code I wrote, or the traceback is so deep into the package that my own code doesn't even appear in the traceback.
Is there a way to "black-box" the package code, or somehow only show traceback lines from my code? I'd like the ability to specify to the system which directories or files I want to see traceback from.
|
In order to print your own stacktrace, you would need to handle all unhandled exceptions yourself; this is where sys.excepthook comes in handy.
The signature for this function is sys.excepthook(type, value, traceback) and its job is:
This function prints out a given traceback and exception to sys.stderr.
So as long as you can play with the traceback and only extract the portion you care about, you should be fine. Testing frameworks do this very frequently; they have custom assert functions which usually do not appear in the traceback, in other words they skip the frames that belong to the test framework. Also, in those cases, the tests are usually started by the test framework as well.
You end up with a traceback that looks like this:
[ custom assert code ] + ... [ code under test ] ... + [ test runner code ]
How to identify your code.
You can add a global to your code:
__mycode = True
Then to identify the frames:
def is_mycode(tb):
globals = tb.tb_frame.f_globals
return globals.has_key('__mycode')
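Alternatively (not part of the original approach, but matching the question's wish to filter by directory), you could decide frame ownership from the frame's file path instead of a module global. A sketch, with a hypothetical project path:
import os

MY_DIRS = ('/path/to/my/project',)  # directories whose code counts as "my code"

def is_mycode(tb):
    filename = os.path.abspath(tb.tb_frame.f_code.co_filename)
    return filename.startswith(MY_DIRS)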
How to extract your frames.
skip the frames that don't matter to you (e.g. custom assert code)
identify how many frames are part of your code -> length
extract length frames
def mycode_traceback_levels(tb):
length = 0
while tb and is_mycode(tb):
tb = tb.tb_next
length += 1
return length
Example handler.
import sys
import traceback

def handle_exception(type, value, tb):
    # 1. skip custom assert code, e.g.
    # while tb and is_custom_assert_code(tb):
    #     tb = tb.tb_next
    # 2. only display your code
    length = mycode_traceback_levels(tb)
    print ''.join(traceback.format_exception(type, value, tb, length))
install the handler:
sys.excepthook = handle_exception
What next?
You could adjust length to add one or more levels if you still want some info about where the failure is outside of your own code.
see also https://gist.github.com/dnozay/b599a96dc2d8c69b84c6
|
Django and Dropzone.js
|
When I upload files with dropzone it adds them to the database, but they don't have a file, just an ID and creation date. I think the view is the problem but I've tried tons of stuff and I can't figure it out. See my edit below for a more detailed account.
Here is the view
@login_required(login_url='/dashboard-login/')
def dashboard(request):
current_user = request.user
current_client = request.user.client
files = ClientUpload.objects.filter(client=current_client)
form = UploadFileForm()
if request.method == 'POST':
if request.FILES is None:
logger = logging.getLogger(__name__)
logger.warning("No files were attached to the upload.")
return HttpResponseBadRequest('No Files Attached.')
if form.is_valid():
upload = form.save()
form = UploadFileForm(request.POST, request.FILES)
else:
uploaded_files = [request.FILES.get('file_upload[%d]' % i)
for i in range(0, len(request.FILES))]
for f in uploaded_files:
client_upload = ClientUpload.objects.create(client=current_client, file_upload=f)
#for key in request.FILES:
# cupload = ClientUpload.objects.create(client=current_client, file_upload=request.FILES[key])
logger = logging.getLogger(__name__)
logger.debug(request.FILES)
logger.info("File(s) uploaded from " + current_client.company)
return HttpResponseRedirect(reverse('dashboard'))
data = {'form': form, 'client': current_client, 'files': files}
return render_to_response('dashboard.html', data, context_instance=RequestContext(request))
Here are my dz options:
url: '127.0.0.1:8003/dashboard/',
method: "post",
withCredentials: false,
parallelUploads: 12,
uploadMultiple: true,
maxFilesize: 256*4*2,
paramName: "file_upload",
createImageThumbnails: true,
maxThumbnailFilesize: 20,
thumbnailWidth: 100,
thumbnailHeight: 100,
maxFiles: 12,
params: {},
clickable: true,
ignoreHiddenFiles: true,
acceptedFiles: null,
acceptedMimeTypes: null,
autoProcessQueue: false,
addRemoveLinks: true,
previewsContainer: null,
dictDefaultMessage: "Drop files here to upload",
dictFallbackMessage: "Your browser does not support drag and drop file uploads.",
dictFallbackText: "Please use the fallback form below to upload your files.",
dictFileTooBig: "File is too big ({{filesize}}MB). Max filesize: {{maxFilesize}}MB.",
dictInvalidFileType: "You can't upload files of this type.",
dictResponseError: "Server responded with {{statusCode}} code.",
dictCancelUpload: "Cancel upload",
dictCancelUploadConfirmation: "Are you sure you want to cancel this upload?",
dictRemoveFile: "Remove",
dictRemoveFileConfirmation: null,
dictMaxFilesExceeded: "You can only upload {{maxFiles}} files.",
And here is the template:
{% load i18n %}
{% load staticfiles %}
{% load crispy_forms_tags %}
<link href="{% static 'css/dropzone2.css' %}" type="text/css" rel="stylesheet"/>
<form class="dropzone" id="myDropzone" method="post" action="{% url 'dashboard' %}" enctype="multipart/form-data">
{% csrf_token %}
<div class="fallback">
<input name="file" type="file" multiple />
</div>
</form>
<button class="upload-control btn-success btn" type="submit" id='submit-all' onclick="document.getElementById('myDropzone').submit()">
<i class="glyphicon glyphicon-upload"></i>
<span>{% trans 'Submit' %}</span>
</button>
<style>
.upload-control {
margin-top: 10px;
margin-bottom: 0px;
}
</style>
<script src="{% static 'js/dropzone.js' %}"></script>
<script src="{% static 'js/jquery-2.1.4.min.js' %}"></script>
<script type="text/javascript">
Dropzone.autoDiscover = false
$(document).ready(function() {
Dropzone.options.myDropzone = {
init : function() {
var submitButton = document.querySelector("#submit-all")
myDropzone = this;
submitButton.addEventListener("click", function(e) {
e.stopPropagation();
e.preventDefault();
myDropzone.processQueue();
});
this.on("sendingmultiple", function() {
// Figure out what I want here or if I want at all
});
this.on("successmultiple", function(files, response) {
window.location.reload();
});
this.on("errormultiple", function(files, response) {
// Figure out what I want here or if I want at all
});
}
// Do I need this?
//myDropzone.on('success', myDropzone.processQueue.bind(myDropzone));
};
});
</script>
EDIT:
It works now after adding http:// to the url setting. But when I upload a file it is added to the database, but the file field is blank. The multivaluedict shows the file when I print it out, but when it is saved to the database the file field has nothing in it.
When I upload one file I get this in request.FILES:
<MultiValueDict: {u'file_upload[]': [<InMemoryUploadedFile: normal.PNG (image/png)>]}>
When I upload two I get this in request.FILES:
<MultiValueDict: {u'file_upload[]': [<TemporaryUploadedFile: normal.PNG (image/png)>]}>
Despite there being two files it only shows one, but adds them both to the database (both without files, just an ID and creation date). Also, what are TemporaryUploadedFile and InMemoryUploadedFile?
It should have indexes in the u'file_upload[]' when I upload more than one but it doesn't. I have the settings correct for uploading multiples.
But I can't seem to get them out of the MultiValueDict. And when I try something like:
for upload in request.FILES:
client_upload = ClientUpload.objects.create(client=current_client, file_upload=upload)
I run into that problem where the admin panel shows an ID and time but no file. It happens when uploading one or more. I'm not sure what the difference is between InMemoryUploadedFile and TemporaryUploadedFile either. How can I extract the files from the MultiValueDict? get() is not working, and with the list comprehension I just get an empty list.
The other odd thing is that when I upload certain files the MultiValueDict is empty, and with others it is not. It also seems that my view gets called more than once (according to the log outputs); that would be normal for a POST followed by a redirect to a GET, except it seems to be more than one POST request. I checked the dev tools in Chrome and I only see one, yet my log statement is printed twice for every submit. I know the issue is probably in my view, but I've tried a ton of stuff and can't figure out what is wrong.
Anybody have any ideas?
|
I'm working with Dropzone and Django myself for creating Image objects for each file uploaded, which seems to be akin to what you want to do. I'd like to point out some things that I've experienced and show you how I'm doing it to see if that helps.
What you need
The things that you need in order to create a record in the Database for files uploaded with Dropzone is:
The Dropzone HTML form
The Javascript initialization of Dropzone.
A Django View to handle the uploaded files.
I don't understand what you're doing with the Form (is it just validating?) but it seems to be unnecessary. You don't need it (and don't use it) to actually save the file.
Accessing the uploaded files
First lets talk about how to access the files in request.FILES. By setting uploadMultiple: true on your Dropzone configuration you condition Dropzone not to send dzfile but to send each file represented as dzfile[%d] (i.e. dzfile[0], dzfile[1], etc).
Even if that were not the case, you're using request.FILES as if it were a list (for f in request.FILES), but as you point out it's actually a dict.
Here's what Python shows when I print request.FILES:
<MultiValueDict: {u'dzfile[1]': [<InMemoryUploadedFile: image2.jpg (image/jpeg)>], u'dzfile[2]': [<InMemoryUploadedFile: image3.jpg (image/jpeg)>], u'dzfile[0]': [<InMemoryUploadedFile: image1.jpg (image/jpeg)>]}>
To access the actual files you need to get each key by its name.
files = [request.FILES.get('dzfile[%d]' % i)
for i in range(0, len(request.FILES))]
NOW you have the file list you wanted. Simply iterate through it and create your objects however you want. I'm not sure on how your Models work so I'm going to approximate.
for f in files:
# Create a ClientUpload object by setting its FK to client and
# FileField to the file. Correct me if I deduced the models incorrectly
client_upload = ClientUpload.objects.create(
client=current_client,
file_upload=f,
)
That should be enough to create the objects that you want.
Dropzone Javascript
It seems that in the Click event listener you add to the submit button you have to add
e.preventDefault();
e.stopPropagation();
before calling processQueue() to avoid a double form submission.
As to sendingmultiple, successmultiple and errormultiple, what do you want to happen there? The comments are just there to indicate when those events are triggered.
I personally use:
this.on('sendingmultiple', function () {
// `sendingmultiple` to hide the submit button
$('#my-dropzone').find('button[type=submit]').hide();
});
this.on('successmultiple', function (files, response) {
// `successmultiple` to reload the page (and show the updated info)
window.location.reload();
});
this.on('errormultiple', function (files, response) {
// `errormultiple` to un-hide the button
$('#my-dropzone').find('button[type=submit]').show();
});
But of course you can do what you want.
And finally, what do you intend to happen with that last line in the <script> tag? I don't quite understand it; it looks as if you wanted to re-process the queue on success. It doesn't seem to belong there.
Comment if anything's off, but this setup works fine for me.
|
Start IPython notebook server without running web browser?
|
I would like to use Emacs as main editor for ipython notebooks (with package ein). I want to ask you if there is a way to run the server without the need to open a web browser.
|
Is this what you want?
$ ipython notebook --no-browser
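If you prefer to make this the default rather than a command-line flag, the equivalent setting can go into your notebook config file. A sketch (the file name and location depend on your IPython/Jupyter version and profile):
# e.g. ~/.ipython/profile_default/ipython_notebook_config.py
c = get_config()
c.NotebookApp.open_browser = False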
|
Python string formatting with percent sign
|
I am trying to do exactly the following:
>>> x = (1,2)
>>> y = 'hello'
>>> '%d,%d,%s' % (x[0], x[1], y)
'1,2,hello'
However, I have a long x, more than two items, so I tried:
>>> '%d,%d,%s' % (*x, y)
but it is syntax error. What would be the proper way of doing this without indexing like the first example?
|
str % .. accepts a tuple as a right-hand operand, so you can do the following:
>>> x = (1, 2)
>>> y = 'hello'
>>> '%d,%d,%s' % (x + (y,)) # Building a tuple of `(1, 2, 'hello')`
'1,2,hello'
Your try should work in Python 3.5+, where Additional Unpacking Generalizations (PEP 448) are supported, but not in Python 2.x:
>>> '%d,%d,%s' % (*x, y)
'1,2,hello'
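As an aside (not part of the original %-formatting question), str.format sidesteps the problem in both Python 2 and 3, because you can unpack the combined tuple in the call:
>>> '{},{},{}'.format(*(x + (y,)))
'1,2,hello'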
|
IPython notebook won't read the configuration file
|
I used the following command to initialize a profile:
ipython profile create myserver
Added these lines to ~/.ipython/profile_myserver/ipython_notebook_config.py:
c = get_config()
c.NotebookApp.ip = '*'
c.NotebookApp.port = 8889
Tried starting the notebook with:
ipython notebook --profile=myserver --debug
It does not read the config file at all.
This is the log output:
[W 16:26:56.607 NotebookApp] Unrecognized alias: '--profile=myserver', it will probably have no effect.
[D 16:26:56.609 NotebookApp] Config changed:
[D 16:26:56.609 NotebookApp] {'profile': u'myserver', 'NotebookApp': {'log_level': 10}}
...
[I 16:26:56.665 NotebookApp] 0 active kernels
[I 16:26:56.665 NotebookApp] The IPython Notebook is running at: http://localhost:8888/
Since I've explicitly specified port 8889 and it still runs on 8888, it clearly ignores the config file. What am I missing?
|
IPython has now moved to version 4.0, which means that if you are using it, it will be reading its configuration from ~/.jupyter, not ~/.ipython. You have to create a new configuration file with
jupyter notebook --generate-config
and then edit the resulting ~/.jupyter/jupyter_notebook_config.py file according to your needs.
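The settings from the question then go into the new file, for example (a sketch reusing the same values):
# ~/.jupyter/jupyter_notebook_config.py
c = get_config()
c.NotebookApp.ip = '*'
c.NotebookApp.port = 8889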
More installation instructions here.
|
Open S3 object as a string with Boto3
|
I'm aware that with Boto 2 it's possible to open an S3 object as a string with:
get_contents_as_string()
http://boto.readthedocs.org/en/latest/ref/file.html?highlight=contents%20string#boto.file.key.Key.get_contents_as_string
Is there an equivalent function in boto3 ?
|
This isn't in the boto3 documentation. This worked for me:
object.get()["Body"].read()
object being an s3 object: http://boto3.readthedocs.org/en/latest/reference/services/s3.html#object
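Note that in Python 3 read() returns bytes, so decode it if you want a str. A small sketch (bucket and key names are made up):
import boto3

s3 = boto3.resource('s3')
obj = s3.Object('my-bucket', 'my-key')   # hypothetical bucket and key
body = obj.get()['Body'].read()          # bytes
text = body.decode('utf-8')              # decode if the object is UTF-8 text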
|
numpy array, difference between a /= x vs. a = a / x
|
I'm using python 2.7.3. When I execute the following piece of code:
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
a = a / float(2**16 - 1)
print a
This will result in the following output:
>> array([[1.52590219e-05, 3.05180438e-05, 4.57770657e-05],
>> [6.10360876e-05, 7.62951095e-05, 9.15541314e-05]])
Exactly as expected, however when I execute the following piece of code:
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
a /= float(2**16 - 1)
print a
I get the following output:
>> array([[0, 0, 0],
>> [0, 0, 0]])
I expected the same output as in the previous example; I don't understand the different output, which seems to be a result of using a /= float(2**16 - 1) vs. a = a / float(2**16 - 1).
|
From the documentation:
Warning:
In place operations will perform the calculation using the precision decided by the data type of the two operands, but will silently downcast the result (if necessary) so it can fit back into the array. Therefore, for mixed precision calculations, A {op}= B can be different than A = A {op} B. For example, suppose a = ones((3,3)). Then, a += 3j is different than a = a + 3j: while they both perform the same computation, a += 3 casts the result to fit back in a, whereas a = a + 3j re-binds the name a to the result.
Since your array was an array of integers, when using the in-place operations, the result will be downcasted to integers again.
If you change your array so it stores floats originally, then the results (which are floats) can be stored in the original array, and your code will work fine:
>>> a = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> a /= float(2**16 - 1)
>>> a
array([[ 1.52590219e-05, 3.05180438e-05, 4.57770657e-05],
[ 6.10360876e-05, 7.62951095e-05, 9.15541314e-05]])
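A quick way to see what is happening (a sketch) is to inspect the array's dtype and upcast explicitly before dividing in place:
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a.dtype                    # an integer dtype; the exact type is platform-dependent
dtype('int64')
>>> a = a.astype(float)        # explicit upcast, so /= no longer truncates
>>> a /= float(2**16 - 1)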
|
Replace single quotes with double with exclusion of some elements
|
I want to replace all single quotes in the string with double with the exception of occurrences such as "n't", "'ll", "'m" etc.
input="the stackoverflow don\'t said, \'hey what\'"
output="the stackoverflow don\'t said, \"hey what\""
Code 1:(@https://stackoverflow.com/users/918959/antti-haapala)
def convert_regex(text):
return re.sub(r"(?<!\w)'(?!\w)|(?<!\w)'(?=\w)|(?<=\w)'(?!\w)", '"', text)
There are 3 cases: ' is NOT preceded and NOT followed by an alphanumeric character; or is not preceded, but followed by an alphanumeric character; or is preceded and not followed by an alphanumeric character.
Issue: That doesn't work on words that end in an apostrophe, i.e.
most possessive plurals, and it also doesn't work on informal
abbreviations that start with an apostrophe.
Code 2:(@https://stackoverflow.com/users/953482/kevin)
def convert_text_func(s):
c = "_" #placeholder character. Must NOT appear in the string.
assert c not in s
protected = {word: word.replace("'", c) for word in ["don't", "it'll", "I'm"]}
for k,v in protected.iteritems():
s = s.replace(k,v)
s = s.replace("'", '"')
for k,v in protected.iteritems():
s = s.replace(v,k)
return s
The set of words is too large to specify; for example, how would one cover every possessive form like persons'?
Please help.
Edit 1:
I am using @anubhava's brilliant answer. I am facing this issue: sometimes there are language translations for which the approach fails.
Code=
text=re.sub(r"(?<!s)'(?!(?:t|ll|e?m|s|d|ve|re|clock)\b)", '"', text)
Problem:
In the text 'Kumbh melas', melas is a Hindi-to-English translation, not a plural possessive noun.
Input="Similar to the 'Kumbh melas', celebrated by the banks of the holy rivers of India,"
Output=Similar to the "Kumbh melas', celebrated by the banks of the holy rivers of India,
Expected Output=Similar to the "Kumbh melas", celebrated by the banks of the holy rivers of India,
I am looking maybe to add a condition that somehow fixes it. Human-level intervention is the last option.
Edit 2:
Naive and long approach to fix:
def replace_translations(text):
d = enchant.Dict("en_US")
words=tokenize_words(text)
punctuations=[x for x in string.punctuation]
for i,word in enumerate(words):
print i,word
if(i!=len(words) and word not in punctuations and d.check(word)==False and words[i+1]=="'"):
text=text.replace(words[i]+words[i+1],words[i]+"\"")
return text
Are there any corner cases I am missing or are there any better approaches?
|
First attempt
You can also use this regex:
(?:(?<!\w)'((?:.|\n)+?'?)'(?!\w))
DEMO IN REGEX101
This regex matches the whole sentence/word with both quoting marks, from beginning to end, but also captures the content of the quotation inside group nr 1, so you can replace the matched part with "\1".
(?<!\w) - negative lookbehind for a non-word character, to exclude words like "you'll", etc., but to allow the regex to match quotations after characters like \n, :, ;, . or -, etc. The assumption that there will always be a whitespace before a quotation is risky.
' - single quoting mark,
(?:.|\n)+?'?) - non-capturing group: one or more of any character or
new line (to match multiline sentences) with a lazy quantifier (to avoid
matching from the first to the last single quoting mark), followed by an
optional single quote, in case there are two in a row
'(?!\w) - a single quote, followed by a non-word character, to exclude
text like "i'm", "you're" etc. where the quoting mark is between words,
The s' case
However it still has a problem with sentences where an apostrophe occurs after a word ending with s, like: 'the classes' hours'. I think it is impossible to distinguish with a regex when an s followed by ' should be treated as the end of a quotation, or as an s with an apostrophe. But I figured out a kind of limited workaround for this problem, with the regex:
(?:(?<!\w)'((?:.|\n)+?'?)(?:(?<!s)'(?!\w)|(?<=s)'(?!([^']|\w'\w)+'(?!\w))))
DEMO IN REGEX101
PYTHON IMPLEMENTATION
with an additional alternative for cases with s': (?<!s)'(?!\w)|(?<=s)'(?!([^']|\w'\w)+'(?!\w)) where:
(?<!s)'(?!\w) - if there is no s before ', match as in the regex above (first attempt),
(?<=s)'(?!([^']|\w'\w)+'(?!\w)) - if there is an s before ', end the match on this ' only if there is no other ' followed by a non-word
character in the following text, before the end or before another ' (but only a ' preceded by a letter other than s, or the opening of the next quotation). The \w'\w is there to include in such a match any ' which sits between letters, like in i'm, etc.
This regex should only match incorrectly if there are a couple of s' cases in a row. Still, it is far from a perfect solution.
Flaws of \w
Also, using \w there is always a chance that ' occurs after a symbol or a character that is not in [a-zA-Z_0-9] but is still a letter, like some local-language character, and then it will be treated as the beginning of a quotation. This could be avoided by replacing (?<!\w) and (?!\w) with (?<!\p{L}) and (?!\p{L}) or something like (?<=^|[,.?!)\s]), etc., a positive lookaround for characters which can occur in a sentence before a quotation. However, such a list could be quite long.
|
Spline with constraints at border
|
I have measured data on a three dimensional grid, e.g. f(x, y, t). I want to interpolate and smooth this data in the direction of t with splines.
Currently, I do this with scipy.interpolate.UnivariateSpline:
import numpy as np
from scipy.interpolate import UnivariateSpline
# data is my measured data
# data.shape is (len(y), len(x), len(t))
data = np.arange(1000).reshape((5, 5, 40)) # just for demonstration
times = np.arange(data.shape[-1])
y = 3
x = 3
sp = UnivariateSpline(times, data[y, x], k=3, s=6)
However, I need the spline to have vanishing derivatives at t=0. Is there a way to enforce this constraint?
|
The best thing I can think of is to do a minimization with a constraint with scipy.optimize.minimize. It is pretty easy to take the derivative of a spline, so the constraint is simple. I would use a regular spline fit (UnivariateSpline) to get the knots (t), hold the knots fixed (and the degree k, of course), and vary the coefficients c. Maybe there is a way to vary the knot locations as well, but I will leave that to you.
import numpy as np
from scipy.interpolate import UnivariateSpline, splev, splrep
from scipy.optimize import minimize
def guess(x, y, k, s, w=None):
"""Do an ordinary spline fit to provide knots"""
return splrep(x, y, w, k=k, s=s)
def err(c, x, y, t, k, w=None):
"""The error function to minimize"""
diff = y - splev(x, (t, c, k))
if w is None:
diff = np.einsum('...i,...i', diff, diff)
else:
diff = np.dot(diff*diff, w)
return np.abs(diff)
def spline_neumann(x, y, k=3, s=0, w=None):
t, c0, k = guess(x, y, k, s, w=w)
x0 = x[0] # point at which zero slope is required
con = {'type': 'eq',
'fun': lambda c: splev(x0, (t, c, k), der=1),
#'jac': lambda c: splev(x0, (t, c, k), der=2) # doesn't help, dunno why
}
opt = minimize(err, c0, (x, y, t, k, w), constraints=con)
copt = opt.x
return UnivariateSpline._from_tck((t, copt, k))
And then we generate some fake data that should have zero initial slope and test it:
import matplotlib.pyplot as plt
n = 10
x = np.linspace(0, 2*np.pi, n)
y0 = np.cos(x) # zero initial slope
std = 0.5
noise = np.random.normal(0, std, len(x))
y = y0 + noise
k = 3
sp0 = UnivariateSpline(x, y, k=k, s=n*std)
sp = spline_neumann(x, y, k, s=n*std)
plt.figure()
X = np.linspace(x.min(), x.max(), len(x)*10)
plt.plot(X, sp0(X), '-r', lw=1, label='guess')
plt.plot(X, sp(X), '-r', lw=2, label='spline')
plt.plot(X, sp.derivative()(X), '-g', label='slope')
plt.plot(x, y, 'ok', label='data')
plt.legend(loc='best')
plt.show()
|
Exiting Python Debugger ipdb
|
I use ipdb fairly often just to jump into a piece of code that is isolated, i.e. it is hard to write a real script that uses it. Instead, I write a minimal test case with mocking and jump into it.
Exemplary for the workflow:
def func():
...
import ipdb
ipdb.set_trace()
...
def test_case():
...
func()
...
Then, invoke
py.test test_file.py -s -k test_case
Now, usually I just check one variable or two, and then want to quit. Change the code and do it over again.
How do I quit? The manual says q quits the debugger. It doesn't (really). You have to quit a few times before the debugger actually terminates. The same behavior for Ctrl-C and Ctrl-D (with the additional frustration that hitting Ctrl-D several times eventually quits the terminal, too).
Is there a smart way to force quit? Is this workflow even sensible? What is the standard way to do it?
|
I put the following in my .pdbrc
import os
alias kk os.system('kill -9 %d' % os.getpid())
kk kills the debugger (and the process that triggered the debugger).
|
Compact way of writing (a + b == c or a + c == b or b + c == a)
|
Is there a more compact or pythonic way to write the boolean expression
a + b == c or a + c == b or b + c == a
I came up with
a + b + c in (2*a, 2*b, 2*c)
but that is a little strange.
|
If we look at the Zen of Python, emphasis mine:
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
The most Pythonic solution is the one that is clearest, simplest, and easiest to explain:
a + b == c or a + c == b or b + c == a
Even better, you don't even need to know Python to understand this code! It's that easy. This is, without reservation, the best solution. Anything else is intellectual masturbation.
Furthermore, this is likely the best performing solution as well, as it is the only one out of all the proposals that short circuits. If a + b == c, only a single addition and comparison is done.
|
Can't install virtualenvwrapper on OSX 10.11 El Capitan
|
I recently wiped my Mac and reinstalled OSX El Capitan public beta 3. I installed pip with sudo easy_install pip and installed virtualenv with sudo pip install virtualenv and did not have any problems.
Now, when I try to sudo pip install virtualenvwrapper, I get the following:
Users-Air:~ User$ sudo pip install virtualenvwrapper
The directory '/Users/User/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/User/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting virtualenvwrapper
Downloading virtualenvwrapper-4.6.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): virtualenv in /Library/Python/2.7/site-packages (from virtualenvwrapper)
Requirement already satisfied (use --upgrade to upgrade): virtualenv-clone in /Library/Python/2.7/site-packages (from virtualenvwrapper)
Collecting stevedore (from virtualenvwrapper)
Downloading stevedore-1.7.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): pbr<2.0,>=1.3 in /Library/Python/2.7/site-packages (from stevedore->virtualenvwrapper)
Requirement already satisfied (use --upgrade to upgrade): argparse in /Library/Python/2.7/site-packages (from stevedore->virtualenvwrapper)
Collecting six>=1.9.0 (from stevedore->virtualenvwrapper)
Downloading six-1.9.0-py2.py3-none-any.whl
Installing collected packages: six, stevedore, virtualenvwrapper
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/basecommand.py", line 223, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/commands/install.py", line 299, in run
root=options.root_path,
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_set.py", line 640, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_install.py", line 726, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/utils/__init__.py", line 314, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-tTNnKQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
As the issue seems to be with the six package, manually trying to uninstall it with sudo pip uninstall six results in the same error. The output suggests using the -H flag as well, but I still get pretty much the same error:
Users-Air:~ User$ sudo -H pip install virtualenvwrapper
Collecting virtualenvwrapper
Downloading virtualenvwrapper-4.6.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): virtualenv in /Library/Python/2.7/site-packages (from virtualenvwrapper)
Requirement already satisfied (use --upgrade to upgrade): virtualenv-clone in /Library/Python/2.7/site-packages (from virtualenvwrapper)
Collecting stevedore (from virtualenvwrapper)
Downloading stevedore-1.7.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): pbr<2.0,>=1.3 in /Library/Python/2.7/site-packages (from stevedore->virtualenvwrapper)
Requirement already satisfied (use --upgrade to upgrade): argparse in /Library/Python/2.7/site-packages (from stevedore->virtualenvwrapper)
Collecting six>=1.9.0 (from stevedore->virtualenvwrapper)
Downloading six-1.9.0-py2.py3-none-any.whl
Installing collected packages: six, stevedore, virtualenvwrapper
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/basecommand.py", line 223, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/commands/install.py", line 299, in run
root=options.root_path,
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_set.py", line 640, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_install.py", line 726, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/utils/__init__.py", line 314, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-fwQzor-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
I have disabled rootless with sudo nvram boot-args="rootless=0", and this has had no effect. Any help would be appreciated!
|
You can manually install the dependencies that don't exist on a stock 10.11 install, then install the other packages with --no-deps to ignore the dependencies. That way it will skip six (and argparse which is also already installed). This works on my 10.11 beta 6 install:
sudo pip install pbr
sudo pip install --no-deps stevedore
sudo pip install --no-deps virtualenvwrapper
And no need to disable rootless.
|
Python Gaussian Kernel density calculate score for new values
|
this is my code:
import numpy as np
from scipy.stats.kde import gaussian_kde
from scipy.stats import norm
from numpy import linspace,hstack
from pylab import plot,show,hist
import re
import json
attribute_file="path"
attribute_values = [line.rstrip('\n') for line in open(attribute_file)]
obs=[]
#Assume the list obs as loaded
obs=np.asarray(osservazioni)
obs=np.sort(obs,kind='mergesort')
x_min=osservazioni[0]
x_max=osservazioni[len(obs)-1]
# obtaining the pdf (my_pdf is a function!)
my_pdf = gaussian_kde(obs)
# plotting the result
x = linspace(0,x_max,1000)
plot(x,my_pdf(x),'r') # distribution function
hist(obs,normed=1,alpha=.3) # histogram
show()
new_values = np.asarray([-1, 0, 2, 3, 4, 5, 768])[:, np.newaxis]
for e in new_values:
print (str(e)+" - "+str(my_pdf(e)*100*2))
Problem:
The obs array contains the list of all observations.
I need to calculate a score (between 0 and 1) for new values
[-1, 0, 2, 3, 4, 500, 768]
So the value -1 should still get a decent score: it doesn't appear in the observations, but it is next to the value 1, which is very common in them.
|
The reason for that is that you have many more 1's in your observations than 768's. So even if -1 is not exactly 1, it gets a high predicted value, because the histogram has a much larger value at 1 than at 768.
Up to a multiplicative constant, the formula for prediction is:
prediction(x) ~ sum over x_i in D of K((x - x_i) / h)
where K is your kernel, D your observations and h your bandwidth. Looking at the doc for gaussian_kde, we see that if no value is provided for bw_method, it is estimated in some way, which here doesn't suit you.
So you can try some different values: the larger the bandwidth, the more points far from your new data are taken into account, the limit case being an almost constant predicted function.
On the other hand, a very small bandwidth only takes really close points into account, which is what I thing you want.
Some graphs to illustrate the influence of the bandwidth:
Code used:
import matplotlib.pyplot as plt
f, axarr = plt.subplots(2, 2, figsize=(10, 10))
for i, h in enumerate([0.01, 0.1, 1, 5]):
my_pdf = gaussian_kde(osservazioni, h)
axarr[i//2, i%2].plot(x, my_pdf(x), 'r') # distribution function
axarr[i//2, i%2].set_title("Bandwidth: {0}".format(h))
axarr[i//2, i%2].hist(osservazioni, normed=1, alpha=.3) # histogram
With your current code, for x = -1, the value of K((x-x_i)/h) for all x_i's that are equal to 1 is smaller than 1, but you add up a lot of these values (there are 921 1s in your observations, and also 357 2s).
On the other hand, for x = 768, the value of the kernel is 1 for all x_i's which are 768, but there are not many such points (39 to be precise). So here a lot of "small" terms make a larger sum than a small number of larger terms.
If you don't want this behavior, you can decrease the size of your gaussian kernel: this way the penalty (K(-2)) paid because of the distance between -1 and 1 will be higher. But I think that this would be overfitting your observations.
A formula to determine whether a new sample is acceptable (compared to your empirical distribution) or not is more of a statistical problem, you can have a look at stats.stackexchange.com
You can always try to use a low value for the bandwidth, which will give you a peaked predicted function. Then you can normalize this function, dividing it by its maximal value.
After that, all predicted values will be between 0 and 1:
maxDensityValue = np.max(my_pdf(x))
for e in new_values:
print("{0} {1}".format(e, my_pdf(e)/maxDensityValue))
|
Why is math.floor(x/y) != x // y for two evenly divisible floats in Python?
|
I have been reading about division and integer division in Python and the differences between division in Python2 vs Python3. For the most part it all makes sense. Python 2 uses integer division only when both values are integers. Python 3 always performs true division. Python 2.2+ introduced the // operator for integer division.
Examples other programmers have offered work out nice and neat, such as:
>>> 1.0 // 2.0 # floors result, returns float
0.0
>>> -1 // 2 # negatives are still floored
-1
How is // implemented? Why does the following happen:
>>> import math
>>> x = 0.5
>>> y = 0.1
>>> x / y
5.0
>>> math.floor(x/y)
5.0
>>> x // y
4.0
Shouldn't x // y = math.floor(x/y)? These results were produced on python2.7, but since x and y are both floats the results should be the same on python3+. If there is some floating point error where x/y is actually 4.999999999999999 and math.floor(4.999999999999999) == 4.0 wouldn't that be reflected in x/y?
The following similar cases, however, aren't affected:
>>> (.5*10) // (.1*10)
5.0
>>> .1 // .1
1.0
|
I didn't find the other answers satisfying. Sure, .1 has no finite binary expansion, so our hunch is that representation error is the culprit. But that hunch alone doesn't really explain why math.floor(.5/.1) yields 5.0 while .5 // .1 yields 4.0.
The punchline is that a // b is actually doing floor((a - (a % b))/b), as opposed to simply floor(a/b).
.5 / .1 is exactly 5.0
First of all, note that the result of .5 / .1 is exactly 5.0 in Python. This is the case even though .1 cannot be exactly represented. Take this code, for instance:
from decimal import Decimal
num = Decimal(.5)
den = Decimal(.1)
res = Decimal(.5/.1)
print('num: ', num)
print('den: ', den)
print('res: ', res)
And the corresponding output:
num: 0.5
den: 0.1000000000000000055511151231257827021181583404541015625
res: 5
This shows that .5 can be represented with a finite binary expansion, but .1 cannot. But it also shows that despite this, the result of .5 / .1 is exactly 5.0. This is because floating point division results in the loss of precision, and the amount by which den differs from .1 is lost in the process.
That's why math.floor(.5 / .1) works as you might expect: since .5 / .1 is 5.0, writing math.floor(.5 / .1) is just the same as writing math.floor(5.0).
So why doesn't .5 // .1 result in 5?
One might assume that .5 // .1 is shorthand for floor(.5 / .1), but this is not the case. As it turns out, the semantics differ. This is even though the PEP says:
Floor division will be implemented in all the Python numeric
types, and will have the semantics of
a // b == floor(a/b)
As it turns out, the semantics of .5 // .1 are actually equivalent to:
floor((.5 - mod(.5, .1)) / .1)
where mod is the floating point remainder of .5 / .1 rounded towards zero. This is made clear by reading the Python source code.
This is where the fact that .1 can't be exactly represented by binary expansion causes the problem. The floating point remainder of .5 / .1 is not zero:
>>> .5 % .1
0.09999999999999998
and it makes sense that it isn't. Since the binary expansion of .1 is ever-so-slightly greater than the actual decimal .1, the largest integer alpha such that alpha * .1 <= .5 (in our finite precision math) is alpha = 4. So mod(.5, .1) is nonzero, and is roughly .1. Hence floor((.5 - mod(.5, .1)) / .1) becomes floor((.5 - .1) / .1) becomes floor(.4 / .1) which equals 4.
And that's why .5 // .1 == 4.
Why does // do that?
The behavior of a // b may seem strange, but there's a reason for its divergence from math.floor(a/b). In his blog on the history of Python, Guido writes:
The integer division operation (//) and its sibling, the modulo
operation (%), go together and satisfy a nice mathematical
relationship (all variables are integers):
a/b = q with remainder r
such that
b*q + r = a and 0 <= r < b
(assuming a and b are >= 0).
Now, Guido assumes that all variables are integers, but that relationship will still hold if a and b are floats, if q = a // b. If q = math.floor(a/b) the relationship won't hold in general. And so // might be preferred because it satisfies this nice mathematical relationship.
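You can verify that relationship directly for the floats in question (a quick sketch):
>>> a, b = .5, .1
>>> q, r = a // b, a % b
>>> q, r
(4.0, 0.09999999999999998)
>>> b*q + r == a               # the invariant that // and % are designed to satisfy
True
>>> divmod(a, b)               # divmod returns the same pair
(4.0, 0.09999999999999998)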
|
Python Assignment Operator Precedence - (a, b) = a[b] = {}, 5
|
I saw this Python snippet on Twitter and was quite confused by the output:
>>> a, b = a[b] = {}, 5
>>> a
{5: ({...}, 5)}
What is going on here?
|
From the Assignment statements documentation:
An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right.
You have two assignment target lists; a, b, and a[b], the value {}, 5 is assigned to those two targets from left to right.
First the {}, 5 tuple is unpacked to a, b. You now have a = {} and b = 5. Note that {} is mutable.
Next you assign the same dictionary and integer to a[b], where a evaluates to the dictionary, and b evaluates to 5, so you are setting the key 5 in the dictionary to the tuple ({}, 5) creating a circular reference. The {...} thus refers to the same object that a is already referencing.
Because assignment takes place from left to right, you can break this down to:
a, b = {}, 5
a[b] = a, b
so a[b][0] is the same object as a:
>>> a, b = {}, 5
>>> a[b] = a, b
>>> a
{5: ({...}, 5)}
>>> a[b][0] is a
True
|
yield in list comprehensions and generator expressions
|
The following behaviour seems rather counterintuitive to me (Python 3.4):
>>> [(yield i) for i in range(3)]
<generator object <listcomp> at 0x0245C148>
>>> list([(yield i) for i in range(3)])
[0, 1, 2]
>>> list((yield i) for i in range(3))
[0, None, 1, None, 2, None]
The intermediate values of the last line are actually not always None, they are whatever we send into the generator, equivalent (I guess) to the following generator:
def f():
for i in range(3):
yield (yield i)
It strikes me as funny that those three lines work at all. The Reference says that yield is only allowed in a function definition (though I may be reading it wrong and/or it may simply have been copied from the older version). The first two lines produce a SyntaxError in Python 2.7, but the third line doesn't.
Also, it seems odd
that a list comprehension returns a generator and not a list
and that the generator expression converted to a list and the corresponding list comprehension contain different values.
Could someone provide more information?
|
Generator expressions, and set and dict comprehensions are compiled to (generator) function objects. In Python 3, list comprehensions get the same treatment; they are all, in essence, a new nested scope.
You can see this if you try to disassemble a generator expression:
>>> dis.dis(compile("(i for i in range(3))", '', 'exec'))
1 0 LOAD_CONST 0 (<code object <genexpr> at 0x10f7530c0, file "", line 1>)
3 LOAD_CONST 1 ('<genexpr>')
6 MAKE_FUNCTION 0
9 LOAD_NAME 0 (range)
12 LOAD_CONST 2 (3)
15 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
18 GET_ITER
19 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
22 POP_TOP
23 LOAD_CONST 3 (None)
26 RETURN_VALUE
>>> dis.dis(compile("(i for i in range(3))", '', 'exec').co_consts[0])
1 0 LOAD_FAST 0 (.0)
>> 3 FOR_ITER 11 (to 17)
6 STORE_FAST 1 (i)
9 LOAD_FAST 1 (i)
12 YIELD_VALUE
13 POP_TOP
14 JUMP_ABSOLUTE 3
>> 17 LOAD_CONST 0 (None)
20 RETURN_VALUE
The above shows that a generator expression is compiled to a code object, loaded as a function (MAKE_FUNCTION creates the function object from the code object). The .co_consts[0] reference lets us see the code object generated for the expression, and it uses YIELD_VALUE just like a generator function would.
As such, the yield expression works in that context, as the compiler sees these as functions-in-disguise.
Still, I view this as a bug; yield has no place in these expressions. The Python grammar allows it (which is why the code is compilable), but the yield expression specification shows that using yield here should not actually work:
The yield expression is only used when defining a generator function and thus can only be used in the body of a function definition.
This has already led to confusing bugs, where someone tried to use yield in a generator expression inside a generator function, expecting the yield to apply to the function. The Python developers are aware of this issue, with Guido on record stating this is not intended:
I think it is definitely wrong the way it works in 3.x. (Especially since it works as expected in 2.x.)
I agree with Inyeol's preference of fixes: (1) make it work properly for listcomps as well as genexps, (2) if that's not possible, forbid yield in a genexp or listcomp.
The differences between how yield in a list comprehension and yield in a generator expression operate stem from the differences in how these two expressions are implemented. In Python 3 a list comprehension uses LIST_APPEND calls to add the top of the stack to the list being built, while a generator expression instead yields that value. Adding in (yield <expr>) just adds another YIELD_VALUE opcode to either:
>>> dis.dis(compile("[(yield i) for i in range(3)]", '', 'exec').co_consts[0])
1 0 BUILD_LIST 0
3 LOAD_FAST 0 (.0)
>> 6 FOR_ITER 13 (to 22)
9 STORE_FAST 1 (i)
12 LOAD_FAST 1 (i)
15 YIELD_VALUE
16 LIST_APPEND 2
19 JUMP_ABSOLUTE 6
>> 22 RETURN_VALUE
>>> dis.dis(compile("((yield i) for i in range(3))", '', 'exec').co_consts[0])
1 0 LOAD_FAST 0 (.0)
>> 3 FOR_ITER 12 (to 18)
6 STORE_FAST 1 (i)
9 LOAD_FAST 1 (i)
12 YIELD_VALUE
13 YIELD_VALUE
14 POP_TOP
15 JUMP_ABSOLUTE 3
>> 18 LOAD_CONST 0 (None)
21 RETURN_VALUE
The YIELD_VALUE opcode at bytecode indexes 15 and 12 respectively is extra, a cuckoo in the nest. So for the list-comprehension-turned-generator you have 1 yield producing the top of the stack each time (replacing the top of the stack with the yield return value), and for the generator expression variant you yield the top of the stack (the integer) and then yield again, but now the stack contains the return value of the yield and you get None that second time.
For the list comprehension then, the intended list object output is still returned, but Python 3 sees this as a generator so the return value is instead attached to the StopIteration exception as the value attribute:
>>> from itertools import islice
>>> listgen = [(yield i) for i in range(3)]
>>> list(islice(listgen, 3)) # avoid exhausting the generator
[0, 1, 2]
>>> try:
... next(listgen)
... except StopIteration as si:
... print(si.value)
...
[None, None, None]
Those None objects are the return values from the yield expressions.
And to reiterate this again; this same issue applies to dictionary and set comprehension in Python 2 and Python 3 as well; in Python 2 the yield return values are still added to the intended dictionary or set object, and the return value is 'yielded' last instead of attached to the StopIteration exception:
>>> list({(yield k): (yield v) for k, v in {'foo': 'bar', 'spam': 'eggs'}.items()})
['bar', 'foo', 'eggs', 'spam', {None: None}]
>>> list({(yield i) for i in range(3)})
[0, 1, 2, set([None])]
|
pyautogui.locateCenterOnScreen() returns None instead of coordinates
|
import pyautogui
print (pyautogui.locateCenterOnScreen("C:\Users\Venkatesh_J\PycharmProjects\mouse_event\mouse_event.png"))
Instead of returning coordinates, it returns None.
|
Seems like it couldn't find anything matching your image on the screen.
locateCenterOnScreen(image, grayscale=False) - Returns (x, y) coordinates of the center of the first found instance of the image on the screen. Returns None if not found on the screen.
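A defensive pattern (a sketch; the image path is hypothetical) is to treat None as "not found" instead of unpacking it:
import pyautogui

location = pyautogui.locateCenterOnScreen(r"C:\path\to\mouse_event.png")
if location is None:
    print("Image not found on screen; check that it is visible at 100% scale")
else:
    x, y = location
    pyautogui.click(x, y)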
|
Python Multiple Assignment Statements In One Line
|
(Don't worry, this isn't another question about unpacking tuples.)
In python, a statement like foo = bar = baz = 5 assigns the variables foo, bar, and baz to 5. It assigns these variables from left to right, as can be proved by nastier examples like
>>> foo[0] = foo = [0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined
>>> foo = foo[0] = [0]
>>> foo
[[...]]
>>> foo[0]
[[...]]
>>> foo is foo[0]
True
But the python language reference states that assignment statements have the form
(target_list "=")+ (expression_list | yield_expression)
and on assignment the expression_list is evaluated first and then the assigning happens.
So how can the line foo = bar = 5 be valid, given that bar = 5 isn't an expression_list? How are these multiple assignments on one line getting parsed and evaluated? Am I reading the language reference wrong?
|
All credit goes to @MarkDickinson, who answered this in a comment:
Notice the + in (target_list "=")+, which means one or more copies. In foo = bar = 5, there are two (target_list "=") productions, and the expression_list part is just 5
All target_list productions (i.e. things that look like foo =) in an assignment statement get assigned, from left to right, to the expression_list on the right end of the statement, after the expression_list gets evaluated.
And of course the usual 'tuple-unpacking' assignment syntax works within this syntax, letting you do things like
>>> foo, boo, moo = boo[0], moo[0], foo[0] = moo[0], foo[0], boo[0] = [0], [0], [0]
>>> foo
[[[[...]]]]
>>> foo[0] is boo
True
>>> foo[0][0] is moo
True
>>> foo[0][0][0] is foo
True
|
How can I efficiently read and write files that are too large to fit in memory?
|
I am trying to calculate the cosine similarity of 100,000 vectors, and each of these vectors has 200,000 dimensions.
From reading other questions I know that memmap, PyTables and h5py are my best bets for handling this kind of data, and I am currently working with two memmaps; one for reading the vectors, the other for storing the matrix of cosine similarities.
Here is my code:
import numpy as np
import scipy.spatial.distance as dist
xdim = 200000
ydim = 100000
wmat = np.memmap('inputfile', dtype = 'd', mode = 'r', shape = (xdim,ydim))
dmat = np.memmap('outputfile', dtype = 'd', mode = 'readwrite', shape = (ydim,ydim))
for i in np.arange(ydim):
for j in np.arange(i+1,ydim):
dmat[i,j] = dist.cosine(wmat[:,i],wmat[:,j])
dmat.flush()
Currently, htop reports that I am using 224G of VIRT memory, and 91.2G of RES memory which is climbing steadily. It seems to me as if, by the end of the process, the entire output matrix will be stored in memory, which is something I'm trying to avoid.
QUESTION:
Is this a correct usage of memmaps? Am I writing to the output file in a memory-efficient manner (by which I mean that only the necessary parts of the input and output files, i.e. dmat[i,j] and wmat[:,i], wmat[:,j], are stored in memory)?
If not, what did I do wrong, and how can I fix this?
Thanks for any advice you may have!
EDIT: I just realized that htop is reporting total system memory usage at 12G, so it seems it is working after all... anyone out there who can enlighten me? RES is now at 111G...
EDIT2: The memmap is created from a 1D array consisting of lots and lots of long decimals quite close to 0, which is shaped to the desired dimensions. The memmap then looks like this.
memmap([[ 9.83721223e-03, 4.42584107e-02, 9.85033578e-03, ...,
-2.30691545e-07, -1.65070799e-07, 5.99395837e-08],
[ 2.96711345e-04, -3.84307391e-04, 4.92968462e-07, ...,
-3.41317722e-08, 1.27959347e-09, 4.46846438e-08],
[ 1.64766260e-03, -1.47337747e-05, 7.43660202e-07, ...,
7.50395136e-08, -2.51943163e-09, 1.25393555e-07],
...,
[ -1.88709000e-04, -4.29454722e-06, 2.39720287e-08, ...,
-1.53058717e-08, 4.48678211e-03, 2.48127260e-07],
[ -3.34207882e-04, -4.60275148e-05, 3.36992876e-07, ...,
-2.30274532e-07, 2.51437794e-09, 1.25837564e-01],
[ 9.24923862e-04, -1.59552854e-03, 2.68354822e-07, ...,
-1.08862665e-05, 1.71283316e-07, 5.66851420e-01]])
|
Memory maps are exactly what the name says: mappings of (virtual) disk sectors into memory pages. The memory is managed by the operating system on demand. If there is enough memory, the system keeps parts of the files in memory, maybe filling up the whole memory; if there is not enough left, the system may discard pages read from the file or may swap them out to swap space. Normally you can rely on the OS being as efficient as possible.
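To illustrate (a sketch reusing the shapes from the question): touching a slice of a memmap only pages in the data backing that slice, not the whole file.
import numpy as np

wmat = np.memmap('inputfile', dtype='d', mode='r', shape=(200000, 100000))
row = np.array(wmat[0, :])   # copies one contiguous row into ordinary memory
print(row.nbytes)            # 800000 bytes (~0.8 MB), not the ~160 GB file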
|
Best practices for turning jupyter notebooks into python scripts
|
Jupyter (iPython) notebook is deservedly known as a good tool for prototyping the code and doing all kinds of machine learning stuff interactively. But when I use it, I inevitably run into the following:
the notebook quickly becomes too complex and messy to be maintained and improved further as notebook, and I have to make python scripts out of it;
when it comes to production code (e.g. one that needs to be re-run every day), the notebook again is not the best format.
Suppose I've developed a whole machine learning pipeline in jupyter that includes fetching raw data from various sources, cleaning the data, feature engineering, and training models after all. Now what's the best logic to make scripts from it with efficient and readable code? I used to tackle it several ways so far:
Simply convert .ipynb to .py and, with only slight changes, hard-code all the pipeline from the notebook into one python script.
'+': quick
'-': dirty, non-flexible, not convenient to maintain
Make a single script with many functions (approximately, 1 function for each one or two cell), trying to comprise the stages of the pipeline with separate functions, and name them accordingly. Then specify all parameters and global constants via argparse.
'+': more flexible usage; more readable code (if you properly transformed the pipeline logic to functions)
'-': oftentimes, the pipeline is NOT splittable into logically completed pieces that could become functions without any quirks in the code. All these functions are typically needed to be only called once in the script rather than to be called many times inside loops, maps etc. Furthermore, each function typically takes the output of all functions called before, so one has to pass many arguments to each function.
The same thing as point (2), but now wrap all the functions inside the class. Now all the global constants, as well as outputs of each method can be stored as class attributes.
'+': you needn't to pass many arguments to each method -- all the previous outputs already stored as attributes
'-': the overall logic of a task is still not captured -- it is data and machine learning pipeline, not just class. The only goal for the class is to be created, call all the methods sequentially one-by-one and then be removed. On top of this, classes are quite long to implement.
Convert a notebook into python module with several scripts. I didn't try this out, but I suspect this is the longest way to deal with the problem.
I suppose this overall setting is very common among data scientists, but surprisingly I cannot find any useful advice on it.
Folks, please, share your ideas and experience. Have you ever encountered this issue? How have you tackled it?
|
We are having a similar issue. However, we are using several notebooks for prototyping, and the outcomes should eventually become several python scripts as well.
Our approach is to put aside the code that repeats across those notebooks. We put it into a python module, which is imported by each notebook and also used in production. We continuously improve this module and add tests for what we find during prototyping.
Notebooks then become more like configuration scripts (which we simply copy into the resulting python files), plus various prototyping checks and validations that we do not need in production. A layout like the sketch below illustrates the idea.
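For example, a layout like the following (all names are just placeholders) keeps the shared code importable from both the notebooks and the production scripts:
project/
    common/              # shared, tested module imported by notebooks and production
        loading.py
        features.py
    notebooks/
        prototype.ipynb  # thin: configuration, checks, validations
    pipeline.py          # production entry point that reuses common/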
Most of all we are not afraid of the refactoring :)
|
Finding the "best" combination for a set
|
I have a set, sentences, which contains sentences from the English language in the form of strings. I wish to create a subset of sentences, sentences2, which contains sentences containing only 20 unique words. Of course, there are many, many such subsets, but I'm looking for the "best" one and by "best" I mean that subset where all words have the highest possible representation in sentences2.
The following example, will further clarify what I mean by "best":
If I was to filter sentences for this set of words:
(i,you,do,think,yes,dont,can,it,good,cant,but,am,why,where,now,no,know,here,feel,are)
I would get the following:
sentences2 = set(("where are you now", "here i am", "can you do it", "yes i can", "but can i do it", "no you cant", "do you feel good", "yes i do", "why are you here", "i dont know", "i think i know why", "you dont think", "yes i do", "no you dont", "i dont think you think", "i feel good", "but i am good", "i cant do it now", "yes you can", "but i cant", "where do you think i am"))
and here each word is represented at least twice, as we can see if we use a counter on sentences2:
c = collections.Counter({'i': 13, 'you': 10, 'do': 6, 'think': 5, 'dont': 4, 'can': 4, 'good': 3, 'but': 3, 'am': 3, 'it': 3, 'cant': 3, 'yes': 3, 'know': 2, 'no': 2, 'here': 2, 'why': 2, 'feel': 2, 'are': 2, 'now': 2, 'where': 2})
If each word is represented at least twice we can say that this set of 20 words has a score of 2.
score = min(c.values())
However, the following set:
(i,you,he,do,think,yes,dont,can,it,good,cant,but,am,why,where,now,no,here,she,are)
has a score of 5, since if I use it to filter sentences, I get a sentences2 where each word is represented at least five times.
So I'm after the highest possible score for all possible 20 word combinations.
Here is my attempt at solving this problem:
sentences = ... # all the sentences in my text
common_words = ... # the hundred most common words in the text
result_size = 20
highest_score = 0
for sample in itertools.combinations(common_words, result_size):
sentences2 = list(filter(lambda s: set(s).issubset(sample), sentences))
c = Counter([j for i in sentences2 for j in i])
if len(c.values()) and min(c.values()) > highest_score:
# this is the set with the highest score to date
print(c)
highest_score = min(c.values())
However, this algorithm will take forever to compute, with 5.3598337040381E+20 combinations if I'm not mistaken. Can you suggest how I might go about solving this with a much faster algorithm?
Please note that the resulting set can contain less than 20 words and that this is completely fine. For example, c.values() in my algorithm does not have to match the size of result_size.
Also note that I'm expecting the words in the resulting set to be found in the top one hundred words (common_words contains 100 values). This is also by design.
|
Disclaimer: You have not specified data characteristics, so my answer will assume that it is not too large (no more than 1,000,000 sentences, each with at most 1,000 words). Also, the description is a bit complicated and I might not have understood the problem fully.
Solution:
Instead of focusing on different combinations, why don't you create a hashMap (dict in python) for your 100 most frequently used words, then traverse the array of sentences and, for each word in each sentence, increase its corresponding value (if it is already inside the dict).
In the end, just sort this hashMap according to the number of occurrences (value) of each word (key), then use the most frequent 20.
Complexity:
A quick look at the algorithm gives:
Traversing N sentences, traversing each of their M words, and increasing the hashMap value. At the end, sorting an array of (word, occurrences) pairs, which is negligible (the hashMap size is constant, 100 frequently used words), and extracting the first 20.
Time Complexity : O(N*M)
Space complexity : O(1) (we don't need to store the sentences, we just have the hashMap)
Sample Code:
Here is some quick pseudo-code:
word_occur_dict = {#initialized with frequent words as keys, and zero as value for all}
for sentence in sentences: #for each sentence
sentence_words = sentence.split(" ") #construct the word list
for word in sentence_words: #for each word
if word in word_occur_dict: #if it is a frequent word, increase value
word_occur_dict[word]++
final_result = sort_dict(word_occur_dict)[:20] #returns list of tuples
Python Code:
import operator
common_words = ["do","think","yes","dont","can","it","good","cant","but","am","why","where","now","no","know","here","feel","are","i","you","he","she"]
common_words_dict = {}
sentences = ["where are you now", "here i am", "can you do it", "yes i can", "but can i do it", "no you cant", "do you feel good", "yes i do", "why are you here", "i dont know", "i think i know why", "you dont think", "yes i do", "no you dont", "i dont think you think", "i feel good", "but i am good", "i cant do it now", "yes you can", "but i cant", "where do you think i am"]
for w in common_words: #initialize the dict
common_words_dict[w] = 0
for sentence in sentences: #for each sentence
sentence_words = sentence.split(" ") #construct the word list
for word in sentence_words: #for each word
if word in common_words_dict: #if it is a frequent word, increase value
common_words_dict[word] = common_words_dict[word]+1
sorted_word_dict = sorted(common_words_dict.items(), key=operator.itemgetter(1))
print sorted_word_dict[::-1][:20]
By the way, 'he' and 'she' do not appear anywhere in the sentences, but you said the following word combination has a score of 5:
(i,you,he,do,think,yes,dont,can,it,good,cant,but,am,why,where,now,no,here,she,are)
Have I misunderstood the problem?
Credit where it is due: StackOverflow: Sort a Python dictionary by value
|
Why can a floating point dictionary key overwrite an integer key with the same value?
|
I'm working through http://www.mypythonquiz.com, and question #45 asks for the output of the following code:
confusion = {}
confusion[1] = 1
confusion['1'] = 2
confusion[1.0] = 4
sum = 0
for k in confusion:
sum += confusion[k]
print sum
The output is 6, since the key 1.0 replaces 1. This feels a bit dangerous to me, is this ever a useful language feature?
|
First of all: the behaviour is documented explicitly in the docs for the hash function:
hash(object)
Return the hash value of the object (if it has one). Hash values are
integers. They are used to quickly compare dictionary keys during a
dictionary lookup. Numeric values that compare equal have the same
hash value (even if they are of different types, as is the case for 1
and 1.0).
Secondly, a limitation of hashing is pointed out in the docs for object.__hash__
object.__hash__(self)
Called by built-in function hash() and for operations on members of
hashed collections including set, frozenset, and dict. __hash__()
should return an integer. The only required property is that objects
which compare equal have the same hash value;
This is not unique to python. Java has the same caveat: if you implement hashCode then, in order for things to work correctly, you must implement it in such a way that: x.equals(y) implies x.hashCode() == y.hashCode().
So, python decided that 1.0 == 1 holds, hence it's forced to provide an implementation for hash such that hash(1.0) == hash(1). The side effect is that 1.0 and 1 act exactly in the same way as dict keys, hence the behaviour.
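You can verify this directly in the interpreter:
>>> 1.0 == 1
True
>>> hash(1.0) == hash(1)
True
>>> {1: 'int', 1.0: 'float'}
{1: 'float'}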
In other words the behaviour in itself doesn't have to be used or useful in any way. It is necessary. Without that behaviour there would be cases where you could accidentally overwrite a different key.
If we had 1.0 == 1 but hash(1.0) != hash(1) we could still have a collision. And if 1.0 and 1 collide, the dict will use equality to be sure whether they are the same key or not and kaboom the value gets overwritten even if you intended them to be different.
The only way to avoid this would be to have 1.0 != 1, so that the dict is able to distinguish between them even in case of collision. But it was deemed more important to have 1.0 == 1 than to avoid the behaviour you are seeing, since you practically never use floats and ints as dictionary keys anyway.
Since python tries to hide the distinction between numbers by automatically converting them when needed (e.g. 1/2 -> 0.5) it makes sense that this behaviour is reflected even in such circumstances. It's more consistent with the rest of python.
This behaviour would appear in any implementation where the matching of the keys is at least partially (as in a hash map) based on comparisons.
For example if a dict was implemented using a red-black tree or another kind of balanced BST, when the key 1.0 is looked up the comparisons with other keys would return the same results as for 1 and so they would still act in the same way.
Hash maps require even more care because of the fact that it's the value of the hash that is used to find the entry of the key and comparisons are done only afterwards. So breaking the rule presented above means you'd introduce a bug that's quite hard to spot because at times the dict may seem to work as you'd expect it, and at other times, when the size changes, it would start to behave incorrectly.
Note that there would be a way to fix this: have a separate hash map/BST for each type inserted in the dictionary. In this way there couldn't be any collisions between objects of different type and how == compares wouldn't matter when the arguments have different types.
However this would complicate the implementation, it would probably be inefficient since hash maps have to keep quite a few free locations in order to have O(1) access times. If they become too full the performances decrease. Having multiple hash maps means wasting more space and also you'd need to first choose which hash map to look at before even starting the actual lookup of the key.
If you used BSTs you'd first have to look up the type and then perform a second lookup. So if you are going to use many types you'd end up with twice the work (and the lookup would take O(log n) instead of O(1)).
|
Why does this Jython loop fail after a single run?
|
I've got the following code:
public static String getVersion()
{
PythonInterpreter interpreter = new PythonInterpreter();
try
{
interpreter.exec(IOUtils.toString(new FileReader("./Application Documents/Scripts/Version.py")));
PyObject get_version = interpreter.get("get_latest_version");
PyObject result = get_version.__call__(interpreter.get("url"));
String latestVersion = (String) result.__tojava__(String.class);
interpreter.close();
return latestVersion;
} catch (IOException ex) {
ex.printStackTrace();
interpreter.close();
return Version.getLatestVersionOnSystem();
}
}
For the sake of completeness, I'm adding the Python code:
import urllib2 as urllib
import warnings
url = 'arcticlights.ca/api/paint&requests?=version'
def get_latest_version(link=url):
request = urllib.Request(link)
handler = urllib.urlopen(request)
if handler.code is not 200:
warnings.warn('Invalid Status Code', RuntimeWarning)
return handler.read()
version = get_latest_version()
It works flawlessly, but only 10% of the time. If I run it with a main like follows:
public static void main(String[] args)
{
for (int i = 0; i < 10; i++) {
System.out.println(getVersion());
}
}
It works the first time. It gives me the output that I want, which is the data from the http request that is written in my Versions.py file, which the java code above calls. After the second time, it throws this massive error (which is 950 lines long, but of course, I won't torture you guys). Here's the gist of it:
Aug 26, 2015 10:41:21 PM org.python.netty.util.concurrent.DefaultPromise execute
SEVERE: Failed to submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
My Python traceback that is supplied at the end of the 950 line Java stack trace is mostly this:
File "<string>", line 18, in get_latest_version
urllib2.URLError: <urlopen error [Errno -1] Unmapped exception: java.util.concurrent.RejectedExecutionException: event executor terminated>
If anyone is curious, the seemingly offending line in my get_latest_version is just:
handler = urllib2.urlopen(request)
Since the server that the code is calling is being run (by cherrypy) on the localhost on my network, I can see how it is interacting with my server. It actually sends two requests (and throws the exception right after the second).
127.0.0.1 - - [26/Aug/2015:22:41:21] "GET / HTTP/1.1" 200 3 "" "Python-urllib/2.7"
127.0.0.1 - - [26/Aug/2015:22:41:21] "GET / HTTP/1.1" 200 3 "" "Python-urllib/2.7"
While I'm never going to run this code in a loop likely, I'm quite curious as to two things:
Is the offending code my Python or Java code? Or could it just be an issue with Jython altogether?
What does the exception mean (it looks like a java exception)? Why is it being thrown when it is? Is there a way to make a loop like this work? Could this be written better?
|
The python library urllib2, which you use, is implemented on top of Netty in Jython.
Netty has a problem, which is widely known:
Hopper: java.util.concurrent.RejectedExecutionException: event executor terminated
Error recurrent : DefaultPromise Failed to notify a listener. Event loop shut down?
Calling HttpClient.shutdown() causes problem to later created clients
Shutting down netty 4 application throws RejectedExecutionException
According to all of these links, the Netty HttpClient fails from time to time after closing. It looks like Netty recovers after some time, and some applications work normally despite this problem. Anyway, it looks unstable.
Q: Is the offending code my Python or Java code? Or could it just be an issue with Jython altogether?
A: The problem is caused by Jython library urllib2, which uses Netty.
Q: What does the exception mean (it looks like a java exception)? Why is it being thrown when it is?
A: urllib2 internally uses Netty. Netty is written in Java and throws this Java exception. Netty uses its own thread executor, which is shut down and unusable for some time after closing a request. You are hitting exactly that window.
Q: Is there a way to make a loop like this work? Could this be written better?
A: I would try to use the Requests library.
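For example, the Python script could be rewritten roughly like this (just a sketch, assuming the Requests package can be installed in your Jython environment; note that Requests needs an explicit scheme in the URL):
import requests
import warnings
url = 'http://arcticlights.ca/api/paint&requests?=version'
def get_latest_version(link=url):
    response = requests.get(link)
    if response.status_code != 200:
        warnings.warn('Invalid Status Code', RuntimeWarning)
    return response.text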
|
How to save a Seaborn plot into a file
|
I tried the following code (test_seaborn.py):
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
import seaborn as sns
sns.set()
df = sns.load_dataset('iris')
sns_plot = sns.pairplot(df, hue='species', size=2.5)
fig = sns_plot.get_figure()
fig.savefig("output.png")
#sns.plt.show()
But I get this error:
Traceback (most recent call last):
File "test_searborn.py", line 11, in <module>
fig = sns_plot.get_figure()
AttributeError: 'PairGrid' object has no attribute 'get_figure'
I expect the final output.png will exist and look like this:
How can I resolve the problem?
|
Remove the get_figure and just use sns_plot.savefig('output.png')
df = sns.load_dataset('iris')
sns_plot = sns.pairplot(df, hue='species', size=2.5)
sns_plot.savefig("output.png")
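Alternatively, a PairGrid keeps a reference to its underlying figure in the fig attribute, so sns_plot.fig.savefig("output.png") should work as well.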
|
Complexity of len() with regard to sets and lists
|
The complexity of len() with regards to sets and lists is equally O(1). How come it takes more time to process sets?
~$ python -m timeit "a=[1,2,3,4,5,6,7,8,9,10];len(a)"
10000000 loops, best of 3: 0.168 usec per loop
~$ python -m timeit "a={1,2,3,4,5,6,7,8,9,10};len(a)"
1000000 loops, best of 3: 0.375 usec per loop
Is it related to the particular benchmark, as in, it takes more time to build sets than lists and the benchmark takes that into account as well?
If the creation of a set object takes more time compared to creating a list, what would be the underlying reason?
|
Firstly, you have not measured the speed of len(), you have measured the speed of creating a list/set together with the speed of len().
Use the --setup argument of timeit:
$ python -m timeit --setup "a=[1,2,3,4,5,6,7,8,9,10]" "len(a)"
10000000 loops, best of 3: 0.0369 usec per loop
$ python -m timeit --setup "a={1,2,3,4,5,6,7,8,9,10}" "len(a)"
10000000 loops, best of 3: 0.0372 usec per loop
The statements you pass to --setup are run before measuring the speed of len().
Secondly, you should note that len(a) is a pretty quick statement. The process of measuring its speed may be subject to "noise". Consider that the code executed (and measured) by timeit is equivalent to the following:
for i in itertools.repeat(None, number):
len(a)
Because both len(a) and itertools.repeat(...).__next__() are fast operations and their speeds may be similar, the speed of itertools.repeat(...).__next__() may influence the timings.
For this reason, you'd better measure len(a); len(a); ...; len(a) (repeated 100 times or so) so that the body of the for loop takes a considerably higher amount of time than the iterator:
$ python -m timeit --setup "a=[1,2,3,4,5,6,7,8,9,10]" "$(for i in {0..1000}; do echo "len(a)"; done)"
10000 loops, best of 3: 29.2 usec per loop
$ python -m timeit --setup "a={1,2,3,4,5,6,7,8,9,10}" "$(for i in {0..1000}; do echo "len(a)"; done)"
10000 loops, best of 3: 29.3 usec per loop
(The results still say that len() has the same performance on lists and sets, but now you can be sure that the result is correct.)
Thirdly, it's true that "complexity" and "speed" are related, but I believe you are making some confusion. The fact that len() has O(1) complexity for lists and sets does not imply that it must run with the same speed on lists and sets.
It means that, on average, no matter how long the list a is, len(a) performs the same asymptotic number of steps. And no matter how long the set b is, len(b) performs the same asymptotic number of steps. But the algorithm for computing the size of lists and sets may be different, resulting in different performance (timeit shows that this is not the case here, however it could have been a possibility).
Lastly,
If the creation of a set object takes more time compared to creating a list, what would be the underlying reason?
A set, as you know, does not allow repeated elements. Sets in CPython are implemented as hash tables (to ensure average O(1) insertion and lookup): constructing and maintaining a hash table is much more complex than adding elements to a list.
Specifically, when constructing a set, you have to compute hashes, build the hash table, look it up to avoid inserting duplicated elements and so on. By contrast, lists in CPython are implemented as a simple array of pointers that is malloc()ed and realloc()ed as required.
|
Installing iPython: "ImportError cannot import name path"?
|
I'm trying to install IPython. I have run pip install ipython[notebook] without any errors, but now I get this:
$ ipython notebook
Traceback (most recent call last):
File "/Users/me/.virtualenvs/.venv/bin/ipython", line 7, in <module>
from IPython import start_ipython
File "/Users/me/.virtualenvs/.venv/lib/python2.7/site-packages/IPython/__init__.py", line 48, in <module>
from .terminal.embed import embed
File "/Users/me/.virtualenvs/.venv/lib/python2.7/site-packages/IPython/terminal/embed.py", line 16, in <module>
from IPython.core.interactiveshell import DummyMod
File "/Users/me/.virtualenvs/.venv/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 31, in <module>
from pickleshare import PickleShareDB
File "/Users/me/.virtualenvs/.venv/lib/python2.7/site-packages/pickleshare.py", line 41, in <module>
from path import path as Path
ImportError: cannot import name path
I have the same error if I try to run import pickleshare at a Python console, or from path import path.
What can I do to fix this?
|
Looks like this is a known issue, caused by a change in the path.py package. Reverting to an older version of path.py solves this:
sudo pip3 install -I path.py==7.7.1
|
How to interactively display and hide lines in a Bokeh plot?
|
It would be nice to be able to interactively display and hide lines in a bokeh plot. Say, I have created my plot something like this:
from bokeh.plotting import output_file, figure, show
from numpy.random import normal, uniform
meas_data_1 = normal(0, 1, 100)
meas_data_2 = uniform(-0.5, 0.5, 100)
output_file("myplot.html", title="My plot")
fig = figure(width=500, plot_height=500)
fig.line(x=range(0, len(meas_data_1)), y=meas_data_1)
fig.line(x=range(0, len(meas_data_2)), y=meas_data_2)
show(fig)
How can I add the possibility to interactively enable/disable one of the two lines?
I know that this is on the wish list (see this feature request), but that doesn't sound like it would be implemented too soon.
I have the impression that this should be possible using a CheckBoxGroup and a self-defined callback, but unfortunately this callback has to be written in JavaScript, which I have absolutely no experience in.
|
This appears on track to be implemented at some point as interactive legends:
https://github.com/bokeh/bokeh/issues/3715
Currently (v0.12.1), there is an example that uses CustomJS on checkboxes to achieve this:
https://github.com/bokeh/bokeh/pull/4868
Relevant code:
import numpy as np
from bokeh.io import output_file, show
from bokeh.layouts import row
from bokeh.palettes import Viridis3
from bokeh.plotting import figure
from bokeh.models import CheckboxGroup, CustomJS
output_file("line_on_off.html", title="line_on_off.py example")
p = figure()
props = dict(line_width=4, line_alpha=0.7)
x = np.linspace(0, 4 * np.pi, 100)
l0 = p.line(x, np.sin(x), color=Viridis3[0], legend="Line 0", **props)
l1 = p.line(x, 4 * np.cos(x), color=Viridis3[1], legend="Line 1", **props)
l2 = p.line(x, np.tan(x), color=Viridis3[2], legend="Line 2", **props)
checkbox = CheckboxGroup(labels=["Line 0", "Line 1", "Line 2"],
active=[0, 1, 2], width=100)
checkbox.callback = CustomJS(args=dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox),
lang="coffeescript", code="""
l0.visible = 0 in checkbox.active;
l1.visible = 1 in checkbox.active;
l2.visible = 2 in checkbox.active;
""")
layout = row(checkbox, p)
show(layout)
|
Why is bytearray not a Sequence in Python 2?
|
I'm seeing a weird discrepancy in behavior between Python 2 and 3.
In Python 3 things seem to work fine:
Python 3.5.0rc2 (v3.5.0rc2:cc15d736d860, Aug 25 2015, 04:45:41) [MSC v.1900 32 b
it (Intel)] on win32
>>> from collections import Sequence
>>> isinstance(bytearray(b"56"), Sequence)
True
But not in Python 2:
Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on wi
n32
>>> from collections import Sequence
>>> isinstance(bytearray("56"), Sequence)
False
The results seem to be consistent across minor releases of both Python 2.x and 3.x. Is this a known bug? Is it a bug at all? Is there any logic behind this difference?
I am actually more worried about the C API function PySequence_Check properly identifying an object of type PyByteArray_Type as exposing the sequence protocol, which by looking at the source code it seems like it should, but any insight into this whole thing is very welcome.
|
Abstract classes from collections use ABCMeta.register(subclass) to
Register subclass as a "virtual subclass" of this ABC.
In Python 3 issubclass(bytearray, Sequence) returns True because bytearray is explicitly registered as a subclass of ByteString (which is derived from Sequence) and MutableSequence. See the relevant part of Lib/_collections_abc.py:
class ByteString(Sequence):
"""This unifies bytes and bytearray.
XXX Should add all their methods.
"""
__slots__ = ()
ByteString.register(bytes)
ByteString.register(bytearray)
...
MutableSequence.register(bytearray) # Multiply inheriting, see ByteString
Python 2 doesn't do that (from Lib/_abcoll.py):
Sequence.register(tuple)
Sequence.register(basestring)
Sequence.register(buffer)
Sequence.register(xrange)
...
MutableSequence.register(list)
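If you rely on that check in Python 2, you can perform the registration yourself:
>>> from collections import Sequence
>>> Sequence.register(bytearray)
>>> isinstance(bytearray("56"), Sequence)
True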
This behaviour was changed in Python 3.0 (in this commit specifically):
Add ABC ByteString which unifies bytes and bytearray (but not memoryview).
There's no ABC for "PEP 3118 style buffer API objects" because there's no
way to recognize these in Python (apart from trying to use memoryview()
on them).
And there's more information in PEP 3119:
This is a proposal to add Abstract Base Class (ABC) support to Python
3000. It proposes:
[...]
Specific ABCs for containers and iterators, to be added to the
collections module.
Much of the thinking that went into the proposal is not about the
specific mechanism of ABCs, as contrasted with Interfaces or Generic
Functions (GFs), but about clarifying philosophical issues like "what
makes a set", "what makes a mapping" and "what makes a sequence".
[...] a metaclass for use with ABCs that will allow us to add an ABC as a "virtual base class" (not the same concept as in C++) to any class, including to another ABC. This allows the standard library to define ABCs Sequence and MutableSequence and register these as virtual base classes for built-in types like basestring, tuple and list, so that for example the following conditions are all true: [...] issubclass(bytearray, MutableSequence).
Just FYI memoryview was registered as a subclass of Sequence only in Python 3.4:
There's no ducktyping for this due to the Sequence/Mapping confusion
so it's a simple missing explicit registration.
(see issue18690 for details).
PySequence_Check from Python C API does not rely on the collections module:
int
PySequence_Check(PyObject *s)
{
if (PyDict_Check(s))
return 0;
return s != NULL && s->ob_type->tp_as_sequence &&
s->ob_type->tp_as_sequence->sq_item != NULL;
}
It checks for non-zero tp_as_sequence field (example for bytearray) and if that succeeds, for non-zero sq_item field (which is basically getitem - example for bytearray).
|
Could not find a version that satisfies the requirement
|
I'm installing several Python packages in Ubuntu 12.04 using the following requirements.txt file:
numpy>=1.8.2,<2.0.0
matplotlib>=1.3.1,<2.0.0
scipy>=0.14.0,<1.0.0
astroML>=0.2,<1.0
scikit-learn>=0.14.1,<1.0.0
rpy2>=2.4.3,<3.0.0
and these two commands:
$ pip install --download=/tmp -r requirements.txt
$ pip install --user --no-index --find-links=/tmp -r requirements.txt
(the first one downloads the packages and the second one installs them).
The process is frequently stopped with the error:
Could not find a version that satisfies the requirement <package> (from matplotlib<2.0.0,>=1.3.1->-r requirements.txt (line 2)) (from versions: )
No matching distribution found for <package> (from matplotlib<2.0.0,>=1.3.1->-r requirements.txt (line 2))
which I fix manually with:
pip install --user <package>
and then run the second pip install command again.
But that only works for that particular package. When I run the second pip install command again, the process is stopped now complaining about another required package and I need to repeat the process again, ie: install the new required package manually (with the command above) and then run the second pip install command.
So far I've had to manually install six, pytz, nose, and now it's complaining about needing mock.
Is there a way to tell pip to automatically install all needed dependencies so I don't have to do it manually one by one?
Add: This only happens in Ubuntu 12.04 BTW. In Ubuntu 14.04 the pip install commands applied on the requirements.txt file work without issues.
|
This approach (having all dependencies in a directory and not downloading from an index) only works when the directory contains all packages. The directory should therefore contain all dependencies but also all packages that those dependencies depend on (e.g., six, pytz etc).
You should therefore manually include these in requirements.txt (so that the first step downloads them explicitly), or you should install all packages from PyPI and then run pip freeze > requirements.txt to store the list of all packages needed.
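One possible workflow (the requirements-full.txt name is just a placeholder):
$ pip install --user -r requirements.txt            # resolve everything from PyPI once
$ pip freeze > requirements-full.txt                # now lists six, pytz, nose, mock, ...
$ pip install --download=/tmp -r requirements-full.txt
$ pip install --user --no-index --find-links=/tmp -r requirements-full.txt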
|
What is the difference between the AWS boto and boto3
|
I'm new to AWS using Python and I'm trying to learn the boto API however I notice there are two major versions/packages for Python. That would be boto, and boto3.
I haven't been able to find an article with the major advantages/disadvantages or differences between these packages.
|
The boto package is the hand-coded Python library that has been around since 2006. It is very popular and is fully supported by AWS but because it is hand-coded and there are so many services available (with more appearing all the time) it is difficult to maintain.
So, boto3 is a new version of the boto library based on botocore. All of the low-level interfaces to AWS are driven from JSON service descriptions that are generated automatically from the canonical descriptions of the services. So, the interfaces are always correct and always up to date. There is a resource layer on top of the client-layer that provides a nicer, more Pythonic interface.
The boto3 library is being actively developed by AWS and is the one I would recommend people use if they are starting new development.
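As a quick illustration of the two layers in boto3 (listing buckets, assuming credentials are already configured):
import boto3
# low-level client, generated from the JSON service description
client = boto3.client('s3')
print([b['Name'] for b in client.list_buckets()['Buckets']])
# higher-level, more Pythonic resource layer
s3 = boto3.resource('s3')
print([bucket.name for bucket in s3.buckets.all()])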
|
Numpy item faster than operator[]
|
I have the following code in python that, at least for me, produces strange results:
import numpy as np
import timeit
a = np.random.rand(3,2)
print timeit.timeit('a[2,1] + 1', 'from __main__ import a', number=1000000)
print timeit.timeit('a.item((2,1)) + 1', 'from __main__ import a', number=1000000)
This gives the result:
0.533630132675
0.103801012039
It seems OK if I only access the numpy element, but when adding 1 to this element the timings get strange... Why is there such a difference in timings?
|
In this case, they don't return quite the same thing. a[2,1] returns a numpy.float64, while a.item((2,1)) returns a native python float.
Native vs numpy scalars (float, int, etc)
A numpy.float64 scalar isn't quite identical to a native python float (they behave identically, however). Simple operations on a single element will be faster with a native python float, as there's less indirection. Have a look at the docstring for ndarray.item for a bit more detail.
As an example of the difference in speed, consider the following:
In [1]: x = 1.2
In [2]: y = np.float64(1.2)
In [3]: %timeit x + 1
10000000 loops, best of 3: 58.9 ns per loop
In [4]: %timeit y + 1
1000000 loops, best of 3: 241 ns per loop
Initially, I incorrectly stated that a second factor was that a.item(...) was slightly faster than a[...]. That actually isn't true. The time it takes for a.item to convert the numpy scalar into a native python scalar overwhelms the time it takes for the additional logic in a[...]/a.__getitem__(...).
Don't generalize this result to more than one item
However, you should be careful about trying to generalize what happens with numpy scalars to how numpy arrays operate as a whole. If you're doing a lot of single-item indexing in numpy, it's generally an anti-pattern.
For example, compare:
In [5]: a = np.random.rand(1000)
In [6]: %timeit a + 1
100000 loops, best of 3: 2.32 us per loop
No matter what we do, we won't be able to match the speed (or much lower memory usage) of the vectorized version (a + 1) above:
In [7]: %timeit [x + 1 for x in a]
1000 loops, best of 3: 257 us per loop
In [8]: %timeit [a.item(i) + 1 for i in range(len(a))]
1000 loops, best of 3: 208 us per loop
Some of this is because iterating through ndarrays is slower than iterating through a list. For a completely fair comparison, let's convert everything over to a list of native python floats:
In [9]: b = a.tolist()
In [10]: type(b[0])
Out[10]: float
In [11]: %timeit [x + 1 for x in b]
10000 loops, best of 3: 69.4 us per loop
Clearly, using vectorized operations (the first case) is much faster when you're operating on larger arrays. It's also far more memory efficient, as lists require storing pointers to each item, while ndarrays are contiguous in memory.
|
How to use the same line of code in all functions?
|
I am newbie in Python.
I wonder if it is possible that all functions inherit the same line of code?
with open(filename, 'r') as f: as this line of code is the same in all three functions. Is it possible to inherit the code without using classes?
I tried to find the answer on stackoverflow and python documentation, but with no luck.
def word_count(filename):
with open(filename, 'r') as f:
return len(f.read().split())
def line_count(filename):
with open(filename, 'r') as f:
return len(f.read().splitlines())
def character_count(filename):
with open(filename, 'r') as f:
return len(f.read())
|
The common code in your case is
with open(filename, 'r') as f:
contents = f.read()
So just move it to its own function:
def get_file_contents(filename):
with open(filename, 'r') as f:
return f.read()
def word_count(filename):
return len(get_file_contents(filename).split())
def line_count(filename):
return len(get_file_contents(filename).splitlines())
def character_count(filename):
return len(get_file_contents(filename))
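If you often need all three counts for the same file, a small variant that reads it only once could build on the same helper (just a sketch):
def file_stats(filename):
    contents = get_file_contents(filename)
    return {
        'words': len(contents.split()),
        'lines': len(contents.splitlines()),
        'characters': len(contents),
    }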
|
Using Spyder IDE, how do you return from "goto definition"?
|
Description of the problem:
I like to jump around code a lot with the keyboard but I am hitting a wall of usability in Spyder IDE. I can use the "goto definition" feature to jump to the definition of some function but then I can't go back to where my cursor was (so it takes a while to manually find where I was before because there might be many lines of code).
So for example there is a constant X=5 in the same file and when I use "goto definition" I can see what that constant is but then there is no way to go back. Or another example is a function from another file where "goto definition" takes me to that other file... but now I can't find the other file I was on (because there may be many files open).
In the 30+ year old vi you can goto the definition and return with ctrl-] and ctrl-t. In the 14+ year old Eclipse the equivalent to "goto definition" would be approximately F3 to go to the definition. And then to return would be alt-left.
running Spyder version 2.2.4.
Question:
Using Spyder IDE, can you return from "goto definition"? If you can, how do you return from "goto definition"?
What I've tried:
I have a keyboard shortcut for "previous cursor position" set to Alt Left but "previous cursor position" doesn't do anything when I hit the key. (The default keyboard shortcut is ctrl-alt-left which conflicts with the Cinnamon-dekstop-manager keyboard shortcut for switching workspaces and so I had to remap the above mentioned keyboard shortcut.)
|
Spyder has one strange bug: the "Previous cursor position" shortcut only works if the "Source toolbar" is present.
Turn on "View -> Toolbars -> Source toolbar" and try it.
|
How to Include image or picture in jupyter notebook
|
I would like to include image in a jupyter notebook.
If I did the following, it works :
from IPython.display import Image
Image("img/picture.png")
But I would like to include the images in a markdown cell and the following code gives a 404 error :

I also tried

But I still get the same error :
404 GET /notebooks/%22/home/user/folder/img/picture.png%22 (127.0.0.1) 2.74ms referer=http://localhost:8888/notebooks/notebook.ipynb
|
You mustn't use quotation marks around the name of the image files in markdown!
If you carefully read your error message, you will see the two %22 parts in the link. That is the html encoded quotation mark.
You have to change the line

to

|
Precision difference when printing Python and C++ doubles
|
I'm currently marvelling over this:
C++ 11
#include <iostream>
#include <iomanip>
#include <limits>
int main()
{
double d = 1.305195828773568;
std::cout << std::setprecision(std::numeric_limits<double>::max_digits10) << d << std::endl;
// Prints 1.3051958287735681
}
Python
>>> repr(1.305195828773568)
'1.305195828773568'
What's going on, why the extra 1 in C++?
So far I thought that C++ and Python use the same 64 bit IEEE doubles under the hood; both formatting functions are supposed to print the full precision.
|
You can force Python to print the 1 as well (and many more of the following digits):
print('{:.16f}'.format(1.305195828773568))
# -> 1.3051958287735681
from https://docs.python.org/2/tutorial/floatingpoint.html:
>>> 7205759403792794 * 10**30 // 2**56
100000000000000005551115123125L
In versions prior to Python 2.7 and Python 3.1, Python rounded this
value to 17 significant digits, giving '0.10000000000000001'. In
current versions, Python displays a value based on the shortest
decimal fraction that rounds correctly back to the true binary value,
resulting simply in '0.1'.
"print the full precision" is hard to do: what is the full precision? the representation of floats is binary; only fractions of powers of 2 can be represented exactly (to full precision); most decimal fractions can not be represented exactly in base 2.
but the float in the memory will be the same for python and c++; it is just the string representation that differs.
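A quick check that both printed strings denote the same underlying double:
>>> float('1.3051958287735681') == float('1.305195828773568')
True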
|
How to get reproducible results in keras
|
I get different results (test accuracy) every time I run the imdb_lstm.py example from Keras framework (https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py)
The code contains np.random.seed(1337) in the top, before any keras imports. It should prevent it from generating different numbers for every run. What am I missing?
UPDATE: How to repro:
Install Keras (http://keras.io/)
Execute https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py a few times. It will train the model and output test accuracy.
Expected result: Test accuracy is the same on every run.
Actual result: Test accuracy is different on every run.
UPDATE2: I'm running it on Windows 8.1 with MinGW/msys, module versions:
theano 0.7.0
numpy 1.8.1
scipy 0.14.0c1
UPDATE3: I narrowed the problem down a bit. If I run the example with GPU (set theano flag device=gpu0) then I get different test accuracy every time, but if I run it on CPU then everything works as expected. (My graphics card: NVIDIA GeForce GT 635.)
|
Theano's documentation talks about the difficulties of seeding random variables and why they seed each graph instance with its own random number generator.
Sharing a random number generator between different {{{RandomOp}}}
instances makes it difficult to produce the same stream regardless
of other ops in graph, and to keep {{{RandomOps}}} isolated.
Therefore, each {{{RandomOp}}} instance in a graph will have its very
own random number generator. That random number generator is an input
to the function. In typical usage, we will use the new features of
function inputs ({{{value}}}, {{{update}}}) to pass and update the rng
for each {{{RandomOp}}}. By passing RNGs as inputs, it is possible to
use the normal methods of accessing function inputs to access each
{{{RandomOp}}}'s rng. In this approach there is no pre-existing
mechanism to work with the combined random number state of an entire
graph. So the proposal is to provide the missing functionality (the
last three requirements) via auxiliary functions: {{{seed, getstate,
setstate}}}.
They also provide examples on how to seed all the random number generators.
You can also seed all of the random variables allocated by a
RandomStreams object by that objectâs seed method. This seed will be
used to seed a temporary random number generator, that will in turn
generate seeds for each of the random variables.
>>> srng.seed(902340) # seeds rv_u and rv_n with different seeds each
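A minimal sketch of seeding both numpy and a Theano RandomStreams object before building the model (assuming the Theano backend; note that, as your UPDATE3 suggests, some GPU operations may still be non-deterministic):
import numpy as np
np.random.seed(1337)
from theano.tensor.shared_randomstreams import RandomStreams
srng = RandomStreams(seed=1337)
# ... build and train the Keras model afterwards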
|
Django app works fine, but getting a TEMPLATE_* warning message
|
When I use runserver, it gives this warning message:
(1_8.W001) The standalone TEMPLATE_* settings were deprecated in
Django 1.8 and the TEMPLATES dictionary takes precedence. You must put
the values of the following settings into your default TEMPLATES dict:
TEMPLATE_DEBUG.
Quoth the Django Documentation:
"TEMPLATE_DEBUG Deprecated since version 1.8: Set the 'debug' option
in the OPTIONS of a DjangoTemplates backend instead."
Here is my settings.py with my futile attempts to fix it:
DEBUG = True
TEMPLATE_DEBUG = DEBUG
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'myapp/templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
'debug': DEBUG,
'DEBUG': DEBUG,
'TEMPLATE_DEBUG': DEBUG
},
}, ]
What am I missing here?
|
Set debug in OPTIONS dictionary of your templates settings.
DEBUG = True
TEMPLATES = [
{
...
'OPTIONS': {
'debug': DEBUG,
},
},
]
Then remove this line from your settings to stop the warnings
TEMPLATE_DEBUG = DEBUG
|
Spark performance for Scala vs Python
|
I prefer Python over Scala. But, as Spark is natively written in Scala, I was expecting my code to run faster in the Scala than the Python version for obvious reasons.
With that assumption, I thought to learn & write the Scala version of some very common preprocessing code for some 1 GB of data. Data is picked from the SpringLeaf competition on Kaggle. Just to give an overview of the data (it contains 1936 dimensions and 145232 rows). Data is composed of various types e.g. int, float, string, boolean. I am using 6 cores out of 8 for Spark processing; that's why I used minPartitions=6 so that every core has something to process.
Scala Code
val input = sc.textFile("train.csv", minPartitions=6)
val input2 = input.mapPartitionsWithIndex { (idx, iter) => if (idx == 0) iter.drop(1) else iter }
val delim1 = "\001"
def separateCols(line: String): Array[String] = {
val line2 = line.replaceAll("true", "1")
val line3 = line2.replaceAll("false", "0")
val vals: Array[String] = line3.split(",")
for((x,i) <- vals.view.zipWithIndex) {
vals(i) = "VAR_%04d".format(i) + delim1 + x
}
vals
}
val input3 = input2.flatMap(separateCols)
def toKeyVal(line: String): (String, String) = {
val vals = line.split(delim1)
(vals(0), vals(1))
}
val input4 = input3.map(toKeyVal)
def valsConcat(val1: String, val2: String): String = {
val1 + "," + val2
}
val input5 = input4.reduceByKey(valsConcat)
input5.saveAsTextFile("output")
Python Code
input = sc.textFile('train.csv', minPartitions=6)
DELIM_1 = '\001'
def drop_first_line(index, itr):
if index == 0:
return iter(list(itr)[1:])
else:
return itr
input2 = input.mapPartitionsWithIndex(drop_first_line)
def separate_cols(line):
line = line.replace('true', '1').replace('false', '0')
vals = line.split(',')
vals2 = ['VAR_%04d%s%s' %(e, DELIM_1, val.strip('\"')) for e, val in enumerate(vals)]
return vals2
input3 = input2.flatMap(separate_cols)
def to_key_val(kv):
key, val = kv.split(DELIM_1)
return (key, val)
input4 = input3.map(to_key_val)
def vals_concat(v1, v2):
return v1 + ',' + v2
input5 = input4.reduceByKey(vals_concat)
input5.saveAsTextFile('output')
Scala Performance
Stage 0 (38 mins), Stage 1 (18 sec)
Python Performance
Stage 0 (11 mins), Stage 1 (7 sec)
Both produce different DAG visualisation graphs (which is why the two pictures show different stage 0 functions for Scala (map) and Python (reduceByKey)).
But essentially, both pieces of code try to transform the data into an RDD of (dimension_id, string of list of values) pairs and save it to disk. The output will be used to compute various statistics for each dimension.
Performance-wise, the Scala code for real data like this seems to run 4 times slower than the Python version.
Good news for me is that it gave me good motivation to stay with Python. Bad news is I didn't quite understand why?
|
The original answer discussing the code can be found below.
First of all you have to distinguish between different types of API, each with its own performance consideration.
RDD API
(pure Python structures with JVM based orchestration)
This is the component which will be most affected by the performance of the Python code and the details of the PySpark implementation. While Python performance is rather unlikely to be a problem, there are at least a few factors you have to consider:
An overhead of JVM communication. Practically all data that comes to and from the Python executor has to be passed through a socket and a JVM worker. While this is relatively efficient local communication, it is still not free.
Process-based executors (Python) versus thread-based (single JVM, multiple threads) executors (Scala). Each Python executor runs in its own process. As a side effect, it provides stronger isolation than its JVM counterpart and some control over executor lifecycle, but potentially significantly higher memory usage:
interpreter memory footprint
footprint of the loaded libraries
less efficient broadcasting (each process requires its own copy of a broadcast)
Performance of the Python code itself. Generally speaking, Scala is faster than plain Python, but it will vary from task to task. Moreover you have multiple options including JITs like Numba, C extensions (Cython) or specialized libraries like Theano. Finally, if you don't use ML / MLlib (or simply the NumPy stack), consider using PyPy as an alternative interpreter. See SPARK-3094.
PySpark configuration provides the spark.python.worker.reuse option which can be used to choose between forking a Python process for each task and reusing an existing process. The latter option seems to be useful to avoid expensive garbage collecting (it is more an impression than a result of systematic tests), while the former one (default) is optimal in case of expensive broadcasts and imports.
MLlib
(mixed Python and JVM execution)
Basic considerations are pretty much the same as before with a few additional issues. While basic structures used with MLlib are plain Python RDD objects all algorithms are executed directly using Scala.
It means additional cost of converting Python objects to Scala objects and the other way around, increased memory usage and some additional limitations we'll cover later.
DataFrame API and Spark ML
(JVM execution with Python code limited to the driver)
These are probably the best choice for standard data processing tasks. Since Python code is mostly limited to the high level logical operations on the driver there should be no performance difference between Python and Scala.
A single exception is Python UDFs, which are significantly less efficient than their Scala equivalents. While there are some chances for improvements (there has been substantial development in Spark 2.0.0), the biggest limitation is the full roundtrip between the internal representation (JVM) and the Python interpreter. If it is at all possible, you should favor compositions of built-in expressions. See for example Stack Overflow while processing several columns with a UDF. Python UDF behavior has been improved in Spark 2.0.0 but it is still suboptimal compared to native execution.
Also be sure to avoid unnecessary passing data between DataFrames and RDDs. This requires expensive serialization and deserialization not to mention data transfer to and from Python interpreter.
GraphX and Spark DataSets
As for now (Spark 1.6), neither one provides a PySpark API, so you can say that PySpark is infinitely worse than Scala (although the introduction of GraphFrames makes the first one less important).
Streaming
From what I've seen so far I would strongly recommend using Scala over Python. It may change in the future if PySpark gets support for structured streams but right now Scala API seems to be much more robust, comprehensive and efficient. My experience is quite limited though.
Non-performance considerations
Feature parity
Not all Spark features are exposed through PySpark API. Be sure to check if the parts you need are already implemented and try to understand possible limitations.
It is particularly important when you use MLlib and similar mixed contexts (see How to use Java/Scala function from an action or a transformation?). To be fair, some parts of the PySpark API, like mllib.linalg, provide a far more comprehensive set of methods than Scala.
API design
PySpark API closely reflects its Scala counterpart and as such is not exactly Pythonic. It means that it is pretty easy to map between languages but at the same time Python code can be significantly harder to understand.
Complex architecture
PySpark data flow is relatively complex compared to pure JVM execution. It is much harder to reason about PySpark programs or debug. Moreover at least basic understanding of Scala and JVM in general is pretty much must have.
It doesn't have to be one vs. another
Spark DataFrame (SQL, Dataset) API provides an elegant way to integrate Scala / Java code in PySpark application. You can use DataFrames to expose data to a native JVM code and read back the results. I've explained some options somewhere else and you can find a working example of Python-Scala roundtrip in How to use a Scala class inside Pyspark.
It can be further augmented by introducing User Defined Types (see How to define schema for custom type in Spark SQL?).
What is wrong with a code provided in the question
(Disclaimer: Pythonista point of view. Most likely I've missed some Scala tricks)
First of all there is one part in your code which doesn't make sense at all. If you already have (key, value) pairs created using zipWithIndex or enumerate, what is the point in creating a string just to split it right afterwards? flatMap doesn't work recursively so you can simply yield tuples and skip the following map whatsoever.
Another part I find problematic is reduceByKey. Generally speaking, reduceByKey is useful if applying an aggregate function can reduce the amount of data that has to be shuffled. Since you simply concatenate strings there is nothing to gain here. Ignoring low-level stuff, like the number of references, the amount of data you have to transfer is exactly the same as for groupByKey.
Normally I wouldn't dwell on that, but as far as I can tell it is a bottleneck in your Scala code. Joining strings on the JVM is a rather expensive operation (see for example: Is string concatenation in scala as costly as it is in Java?). It means that something like this _.reduceByKey((v1: String, v2: String) => v1 + ',' + v2), which is equivalent to input4.reduceByKey(valsConcat) in your code, is not a good idea.
If you want to avoid groupByKey you can try to use aggregateByKey with StringBuilder. Something similar to this should do the trick:
rdd.aggregateByKey(new StringBuilder)(
(acc, e) => {
if(!acc.isEmpty) acc.append(",").append(e)
else acc.append(e)
},
(acc1, acc2) => {
if(acc1.isEmpty | acc2.isEmpty) acc1.addString(acc2)
else acc1.append(",").addString(acc2)
}
)
but I doubt it is worth all the fuss.
Keeping above in mind I've rewritten your code as follows:
Scala:
val input = sc.textFile("train.csv", 6).mapPartitionsWithIndex{
(idx, iter) => if (idx == 0) iter.drop(1) else iter
}
val pairs = input.flatMap(line => line.split(",").zipWithIndex.map{
case ("true", i) => (i, "1")
case ("false", i) => (i, "0")
case p => p.swap
})
val result = pairs.groupByKey.map{
case (k, vals) => {
val valsString = vals.mkString(",")
s"$k,$valsString"
}
}
result.saveAsTextFile("scalaout")
Python:
def drop_first_line(index, itr):
if index == 0:
return iter(list(itr)[1:])
else:
return itr
def separate_cols(line):
line = line.replace('true', '1').replace('false', '0')
vals = line.split(',')
for (i, x) in enumerate(vals):
yield (i, x)
input = (sc
.textFile('train.csv', minPartitions=6)
.mapPartitionsWithIndex(drop_first_line))
pairs = input.flatMap(separate_cols)
result = (pairs
.groupByKey()
.map(lambda kv: "{0},{1}".format(kv[0], ",".join(kv[1]))))
result.saveAsTextFile("pythonout")
Results
In local[6] mode (Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz) with 4GB memory per executor it takes (n = 3):
Scala - mean: 250.00s, stdev: 12.49
Python - mean: 246.66s, stdev: 1.15
I am pretty sure that most of that time is spent on shuffling, serializing, deserializing and other secondary tasks. Just for fun, here is naive single-threaded code in Python that performs the same task on this machine in less than a minute:
def go():
with open("train.csv") as fr:
lines = [
line.replace('true', '1').replace('false', '0').split(",")
for line in fr]
return zip(*lines[1:])
|
wxPython threads blocking
|
This is in the Phoenix fork of wxPython.
I'm trying to run a couple threads in the interests of not blocking the GUI.
Two of my threads work fine, but the other one never seems to hit its bound result function. I can tell that it's running, it just doesn't seem to properly post the event.
Here's the result function for the main calculation threads:
def on_status_result(self, event):
if not self.panel.progress_bar.GetRange():
self.panel.progress_bar.SetRange(event.data.parcel_count)
self.panel.progress_bar.SetValue(event.data.current_parcel)
self.panel.status_label.SetLabel(event.data.message)
Here's how I'm binding them:
from wx.lib.pubsub.core import Publisher
PUB = Publisher()
Here's how I'm binding the method:
def post_event(message, data):
wx.CallAfter(lambda *a: Publisher().sendMessage(message, data=data))
And here are the threads. The first one does not work, but the second two do:
class PrepareThread(threading.Thread):
def __init__(self, notify_window):
threading.Thread.__init__(self)
self._notify_window = notify_window
self._want_abort = False
def run(self):
while not self._want_abort:
for status in prepare_collection(DATABASE, self._previous_id, self._current_id, self._year, self._col_type,
self._lock):
post_event('prepare.running', status)
post_event('prepare.complete', None)
return None
def abort(self):
self._want_abort = True
class SetupThread(threading.Thread):
def __init__(self, notify_window):
threading.Thread.__init__(self)
self._notify_window = notify_window
self._want_abort = False
def run(self):
while not self._want_abort:
do_more_stuff_with_the_database()
return None
def abort(self):
self._want_abort = True
class LatestCollectionsThread(threading.Thread):
def __init__(self, notify_window):
threading.Thread.__init__(self)
self._notify_window = notify_window
self._want_abort = False
def run(self):
while not self._want_abort:
do_stuff_with_my_database()
return None
def abort(self):
self._want_abort = True
prepare_collection is a function that yields Status objects that looks like this:
class Status:
def __init__(self, parcel_count, current_parcel, total, message):
self.parcel_count = parcel_count
self.current_parcel = current_parcel
self.total = total
self.message = message
Here's how I'm creating/starting/subscribing the PrepareThread:
MainForm(wx.Form):
prepare_thread = PrepareThread(self)
prepare_thread.start()
self.pub = Publisher()
self.pub.subscribe(self.on_status_result, 'prepare.running')
self.pub.subscribe(self.on_status_result, 'prepare.complete')
def on_status_result(self, event):
if not self.panel.progress_bar.GetRange():
self.panel.progress_bar.SetRange(event.data.parcel_count)
self.panel.progress_bar.SetValue(event.data.current_parcel)
self.panel.status_label.SetLabel(event.data.message)
I've tried stubbing out prepare_collection with range(10), but I still don't ever hit the event handler.
|
The problem is that the event system ends up calling the update function (event handler) from the threads themselves; you should pretty much never do that (basically you end up with strange race conditions and artifacts). Always make the callback in the main thread.
wxPython has taken this into consideration: any methods called with wx.CallAfter will be called from the main program loop, which is always running in the main thread. This, combined with the wx.pubsub module, allows you to create your own event framework easily... something like this
def MyPostEvent(event_name,event_data):
#just a helper that triggers the event with wx.CallAfter
wx.CallAfter(lambda *a:Publisher().sendMessage(event_name,data=event_data))
#then to post an event
MyPostEvent("some_event.i_made_up",{"payload":True})
#then in your main thread subscribe
def OnEventHandler(evt):
print "EVT.data",evt.data
pub = Publisher()
pub.subscribe("some_event.i_made_up",OnEventHandler)
|
How to use viridis in matplotlib 1.4
|
I want to use the colormap "viridis" (http://bids.github.io/colormap/), and I won't be updating to the development version 1.5 quite yet. Thus, I have downloaded colormaps.py from https://github.com/BIDS/colormap. Unfortunately, I'm not able to make it work. This is what I do:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import colormaps as cmaps
img=mpimg.imread('stinkbug.png')
lum_img = np.flipud(img[:,:,0])
plt.set_cmap(cmaps.viridis)
imgplot = plt.pcolormesh(lum_img)
This gives me a ValueError, the traceback ending with,
ValueError: Colormap viridis is not recognized. Possible values are: Spectral, summer, coolwarm, ...
(And then the complete list of originally installed colormaps.)
Any thoughts on how to fix this issue?
|
Rather than using set_cmap, which requires a matplotlib.colors.Colormap instance, you can set the cmap directly in the pcolormesh call
(cmaps.viridis is a matplotlib.colors.ListedColormap)
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import colormaps as cmaps
img=mpimg.imread('stinkbug.png')
lum_img = np.flipud(img[:,:,0])
imgplot = plt.pcolormesh(lum_img, cmap=cmaps.viridis)
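If you would still like plt.set_cmap (or the 'viridis' string) to work as in the question, registering the imported colormap with matplotlib first should also do it; this is a sketch that assumes the same colormaps module and lum_img as above:
import matplotlib.pyplot as plt
import colormaps as cmaps
plt.register_cmap(name='viridis', cmap=cmaps.viridis)
plt.set_cmap(cmaps.viridis)  # the registered name is now resolvable when plotting
imgplot = plt.pcolormesh(lum_img)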
|
Convert a 64 bit integer into 8 separate 1 byte integers in python
|
In python, I have been given a 64 bit integer. This Integer was created by taking several different 8 bit integers and mashing them together into one giant 64 bit integer. It is my job to separate them again.
For example:
Source number: 2592701575664680400
Binary (64 bits): 0010001111111011001000000101100010101010000101101011111000000000
int 1: 00100011 (35)
int 2: 11111011 (251)
int 3: 00100000 (32)
int 4: 01011000 (88)
int 5: 10101010 (170)
int 6: 00010110 (22)
int 7: 10111110 (190)
int 8: 00000000 (0)
So what I would like to do is take my source number 2592701575664680400 and return an array of length 8, where each int in the array is one of the ints listed above.
I was going to use struct, but to be perfectly honest, reading the documentation hasn't made it quite clear exactly how I would accomplish that.
|
Solution
Solution without converting the number to a string:
x = 0b0010001111111011001000000101100010101010000101101011111000000000
numbers = list((x >> i) & 0xFF for i in range(0,64,8))
print(numbers) # [0, 190, 22, 170, 88, 32, 251, 35]
print(list(reversed(numbers))) # [35, 251, 32, 88, 170, 22, 190, 0]
Explanation
Here I used a generator expression inside list(), looping over i in increments of 8. So i takes the values 0, 8, 16, 24, 32, 40, 48, 56.
Every time, the bitshift operator >> temporarily shifts the number x down by i bits. This is equivalent to an integer division by 2**i.
So the resulting number is:
i = 0: 0010001111111011001000000101100010101010000101101011111000000000
i = 8: 00100011111110110010000001011000101010100001011010111110
i = 16: 001000111111101100100000010110001010101000010110
i = 24: 0010001111111011001000000101100010101010
i = 32: 00100011111110110010000001011000
i = 40: 001000111111101100100000
i = 48: 0010001111111011
i = 56: 00100011
By using & 0xFF, I select the last 8 bits of this number. Example (here with i = 40):
x >> 40: 001000111111101100100000
0xff: 11111111
(x >> 40) & 0xff: 000000000000000000100000
Since the leading zeros do not matter, you have the desired number.
The result is converted to a list and printed in normal and reversed order (like OP wanted it).
Performance
I compared the timing of this result to the other solutions proposed in this thread:
In: timeit list(reversed([(x >> i) & 0xFF for i in range(0,64,8)]))
100000 loops, best of 3: 13.9 µs per loop
In: timeit [(x >> (i * 8)) & 0xFF for i in range(7, -1, -1)]
100000 loops, best of 3: 11.1 µs per loop
In: timeit [(x >> i) & 0xFF for i in range(63,-1,-8)]
100000 loops, best of 3: 10.2 µs per loop
In: timeit reversed(struct.unpack('8B', struct.pack('Q', x)))
100000 loops, best of 3: 3.22 µs per loop
In: timeit reversed(struct.pack('Q', x))
100000 loops, best of 3: 2.07 µs per loop
Result: my solution is not the fastest!
Currently, using struct directly (as proposed by Mark Ransom) seems to be the fastest snippet.
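For completeness, the struct approach referenced in those timings could be spelled out like this (a sketch; using the big-endian format '>Q' yields the most significant byte first, so no reversed call is needed):
import struct
x = 0b0010001111111011001000000101100010101010000101101011111000000000
packed = struct.pack('>Q', x)            # 8 bytes, most significant first
numbers = list(struct.unpack('8B', packed))
print(numbers)                           # [35, 251, 32, 88, 170, 22, 190, 0]
In Python 3, list(packed) gives the same list of ints directly.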
|
Boto3/S3: Renaming an object using copy_object
|
I'm trying to rename a file in my s3 bucket using python boto3, I couldn't clearly understand the arguments. can someone help me here?
What I'm planing is to copy object to a new object, and then delete the actual object.
I found similar questions here, but I need a solution using boto3.
|
I found another solution
s3 = boto3.resource('s3')
s3.Object('my_bucket','my_file_new').copy_from(CopySource='my_bucket/my_file_old')
s3.Object('my_bucket','my_file_old').delete()
|
Generator as function argument
|
Can anyone explain why passing a generator as the only positional argument to a function seems to have special rules?
If we have:
>>> def f(*args):
>>> print "Success!"
>>> print args
This works, as expected.
>>> f(1, *[2])
Success!
(1, 2)
This does not work, as expected.
>>> f(*[2], 1)
File "<stdin>", line 1
SyntaxError: only named arguments may follow *expression
This works, as expected
>>> f(1 for x in [1], *[2])
Success!
(<generator object <genexpr> at 0x7effe06bdcd0>, 2)
This works, but I don't understand why. Shouldn't it fail in the same way as 2)
>>> f(*[2], 1 for x in [1])
Success!
(<generator object <genexpr> at 0x7effe06bdcd0>, 2)
|
Both 3. and 4. should be syntax errors on all Python versions. However you've found a bug that affects Python versions 2.5 - 3.4, and which was subsequently posted to the Python issue tracker. Because of the bug, an unparenthesized generator expression was accepted as an argument to a function if it was accompanied only by *args and/or **kwargs. While Python 2.6+ allowed both cases 3. and 4., Python 2.5 allowed only case 3. - yet both of them were against the documented grammar:
call ::= primary "(" [argument_list [","]
| expression genexpr_for] ")"
i.e. the documentation says a function call consists of a primary (the expression that evaluates to a callable), followed by, in parentheses, either an argument list or just an unparenthesized generator expression;
and within the argument list, all generator expressions must be in parentheses.
This bug (though it seems it had not been known) was fixed in the Python 3.5 prereleases. In Python 3.5, parentheses are always required around a generator expression, unless it is the only argument to the function:
Python 3.5.0a4+ (default:a3f2b171b765, May 19 2015, 16:14:41)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f(1 for i in [42], *a)
File "<stdin>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
This is now documented in the What's New in Python 3.5, thanks to DeTeReR spotting this bug.
Analysis of the bug
There was a change made to Python 2.6 which allowed the use of keyword arguments after *args:
It's also become legal to provide keyword arguments after a *args
argument to a function call.
>>> def f(*args, **kw):
... print args, kw
...
>>> f(1,2,3, *(4,5,6), keyword=13)
(1, 2, 3, 4, 5, 6) {'keyword': 13}
Previously this would have been a syntax error. (Contributed by Amaury
Forgeot d'Arc; issue 3473.)
However, the Python 2.6 grammar does not make any distinction between keyword arguments, positional arguments, or bare generator expressions - they are all of type argument to the parser.
As per Python rules, a generator expression must be parenthesized if it is not the sole argument to the function. This is validated in the Python/ast.c:
for (i = 0; i < NCH(n); i++) {
node *ch = CHILD(n, i);
if (TYPE(ch) == argument) {
if (NCH(ch) == 1)
nargs++;
else if (TYPE(CHILD(ch, 1)) == gen_for)
ngens++;
else
nkeywords++;
}
}
if (ngens > 1 || (ngens && (nargs || nkeywords))) {
ast_error(n, "Generator expression must be parenthesized "
"if not sole argument");
return NULL;
}
However this function does not consider the *args at all - it specifically only looks for ordinary positional arguments and keyword arguments.
Further down in the same function, there is an error message generated for non-keyword arg after keyword arg:
if (TYPE(ch) == argument) {
expr_ty e;
if (NCH(ch) == 1) {
if (nkeywords) {
ast_error(CHILD(ch, 0),
"non-keyword arg after keyword arg");
return NULL;
}
...
But this again applies to arguments that are not unparenthesized generator expressions as evidenced by the else if statement:
else if (TYPE(CHILD(ch, 1)) == gen_for) {
e = ast_for_genexp(c, ch);
if (!e)
return NULL;
asdl_seq_SET(args, nargs++, e);
}
Thus an unparenthesized generator expression was allowed to slip past.
Now in Python 3.5 one can use the *args anywhere in a function call, so
the Grammar was changed to accommodate for this:
arglist: argument (',' argument)* [',']
and
argument: ( test [comp_for] |
test '=' test |
'**' test |
'*' test )
and the for loop was changed to
for (i = 0; i < NCH(n); i++) {
node *ch = CHILD(n, i);
if (TYPE(ch) == argument) {
if (NCH(ch) == 1)
nargs++;
else if (TYPE(CHILD(ch, 1)) == comp_for)
ngens++;
else if (TYPE(CHILD(ch, 0)) == STAR)
nargs++;
else
/* TYPE(CHILD(ch, 0)) == DOUBLESTAR or keyword argument */
nkeywords++;
}
}
Thus fixing the bug.
However the inadvertent change is that the valid looking constructions
func(i for i in [42], *args)
and
func(i for i in [42], **kwargs)
where an unparenthesized generator precedes *args or **kwargs now stopped working.
To locate this bug, I tried various Python versions. In 2.5 you'd get SyntaxError:
Python 2.5.5 (r255:77872, Nov 28 2010, 16:43:48)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> f(*[1], 2 for x in [2])
File "<stdin>", line 1
f(*[1], 2 for x in [2])
And this was fixed before some prerelease of Python 3.5:
Python 3.5.0a4+ (default:a3f2b171b765, May 19 2015, 16:14:41)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f(*[1], 2 for x in [2])
File "<stdin>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
However, with the generator expression parenthesized, it works in Python 3.5 but does not work in Python 3.4:
f(*[1], (2 for x in [2]))
And this is the clue. In Python 3.5 the *splatting is generalized; you can use it anywhere in a function call:
>>> print(*range(5), 42)
0 1 2 3 4 42
So the actual bug (an unparenthesized generator expression being accepted alongside *args) was indeed fixed in Python 3.5, and it can be located by looking at what changed between Python 3.4 and 3.5.
|
How to set React to production mode when using Gulp
|
I need to run React in production mode, which presumably entails defining the following somewhere in the environment:
process.env.NODE_ENV = 'production';
The issue is that I'm running this behind Tornado (a python web-server), not Node.js. I also use Supervisord to manage the tornado instances, so it's not abundantly clear how to set this in the running environment.
I do however use Gulp to build my jsx files to javascript.
Is it possible to somehow set this inside Gulp? And if so, how do I check that React is running in production mode?
Here is my Gulpfile.js:
'use strict';
var gulp = require('gulp'),
babelify = require('babelify'),
browserify = require('browserify'),
browserSync = require('browser-sync'),
source = require('vinyl-source-stream'),
uglify = require('gulp-uglify'),
buffer = require('vinyl-buffer');
var vendors = [
'react',
'react-bootstrap',
'jquery',
];
gulp.task('vendors', function () {
var stream = browserify({
debug: false,
require: vendors
});
stream.bundle()
.pipe(source('vendors.min.js'))
.pipe(buffer())
.pipe(uglify())
.pipe(gulp.dest('build/js'));
return stream;
});
gulp.task('app', function () {
var stream = browserify({
entries: ['./app/app.jsx'],
transform: [babelify],
debug: false,
extensions: ['.jsx'],
fullPaths: false
});
vendors.forEach(function(vendor) {
stream.external(vendor);
});
return stream.bundle()
.pipe(source('build.min.js'))
.pipe(buffer())
.pipe(uglify())
.pipe(gulp.dest('build/js'));
});
gulp.task('watch', [], function () {
// gulp.watch(['./app/**/*.jsx'], ['app', browserSync.reload]);
gulp.watch(['./app/**/*.jsx'], ['app']);
});
gulp.task('browsersync',['vendors','app'], function () {
browserSync({
server: {
baseDir: './',
},
notify: false,
browser: ["google chrome"]
});
});
gulp.task('default',['browsersync','watch'], function() {});
|
Step I: Add the following to your gulpfile.js somewhere
gulp.task('apply-prod-environment', function() {
process.env.NODE_ENV = 'production';
});
Step II: Add it to your default task (or whichever task you use to serve/build your app)
// before:
// gulp.task('default',['browsersync','watch'], function() {});
// after:
gulp.task('default',['apply-prod-environment', 'browsersync','watch'], function() {});
OPTIONAL: If you want to be ABSOLUTELY CERTAIN that you are in prod mode, you can create the following slightly enhanced task instead of the one in Step I:
gulp.task('apply-prod-environment', function() {
process.stdout.write("Setting NODE_ENV to 'production'" + "\n");
process.env.NODE_ENV = 'production';
if (process.env.NODE_ENV != 'production') {
throw new Error("Failed to set NODE_ENV to production!!!!");
} else {
process.stdout.write("Successfully set NODE_ENV to production" + "\n");
}
});
Which will throw the following error if NODE_ENV is ever not set to 'production'
[13:55:24] Starting 'apply-prod-environment'...
[13:55:24] 'apply-prod-environment' errored after 77 μs
[13:55:24] Error: Failed to set NODE_ENV to production!!!!
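One caveat worth adding (not part of the original answer): setting NODE_ENV in the gulp process does not by itself change what browserify bakes into the bundle. For that, the envify transform is commonly added to the browserify stream (this sketch assumes the envify npm package is installed; global: true also covers files under node_modules such as React itself):
var envify = require('envify/custom');
// inside the 'app'/'vendors' tasks, before calling stream.bundle()
stream.transform(envify({ NODE_ENV: 'production' }), { global: true });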
|
pip doesn't work after upgrade
|
Today I upgraded from pip 7.1.0 to 7.1.2, and now it doesn't work.
$ pip search docker-compose
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 223, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/search.py", line 43, in run
pypi_hits = self.search(query, options)
File "/Library/Python/2.7/site-packages/pip/commands/search.py", line 60, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 1240, in __call__
return self.__send(self.__name, args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 1599, in __request
verbose=self.__verbose
File "/Library/Python/2.7/site-packages/pip/download.py", line 788, in request
return self.parse_response(response.raw)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 1490, in parse_response
return u.close()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 799, in close
raise Fault(**self._stack[0])
Fault: <Fault 1: "<type 'exceptions.KeyError'>:'hits'">
So I tried reinstalling:
sudo -H pip install --force-reinstall -U pip
The reinstall ran without error, but when I tried to search, I got the same error.
So, I tried reinstalling the old version:
sudo -H pip install --force-reinstall -U pip==7.1.0
Again, the reinstall worked, but searching was still broken after the reinstall. In addition to the error, I did get the version upgrade message:
You are using pip version 7.1.0, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Disabling the cache also gives the same error:
pip search docker-compose --no-cache-dir --disable-pip-version-check
The problem seems to only be with the search function, as pip still functions well enough to reinstall itself and such.
I believe I have only installed one other package today, which was docker-compose. The problem occurs when I search for packages other than docker-compose, as in my examples.
Any ideas?
|
I wasn't able to reproduce this with pip 7.1.2 and either Python 2.7.8 or 3.5.1 on Linux.
The xmlrpclib docs have this to say on 'faults':
Method calls may also raise a special Fault instance, used to signal
XML-RPC server errors
This implies that pip is reporting a problem on the server (pypi) side.
The Python Infrastructure Status site reports problems with pip search on 2015-09-11 and 2015-09-12.
I suspect that this is not a bug in pip, but a problem with pypi.python.org on the dates in question. This question was asked on 2015-09-11.
A similar error was logged on the pypi bitbucket repo on 2015-09-11, reinforcing my theory.
Interestingly, there is another similar bug logged at pypi's github repo. In this case the search term is a regex:
pip search "^docker-compose$"
I can reproduce this error on Python 2.7.8 and Python3.5.1, pip-7.1.2 and pip-8.1.1 on Linux; however I can't see anything in the pip documentation to suggest that pip search supports regex, and this answer states regex is unsupported, so I think this is a separate issue unrelated to the OP's question.
|
Python stop multiple process when one returns a result?
|
I am trying to write a simple proof-of-work nonce-finder in python.
def proof_of_work(b, nBytes):
nonce = 0
# while the first nBytes of hash(b + nonce) are not 0
while sha256(b + uint2bytes(nonce))[:nBytes] != bytes(nBytes):
nonce = nonce + 1
return nonce
Now I am trying to do this multiprocessed, so it can use all CPU cores and find the nonce faster. My idea is to use multiprocessing.Pool and execute the function proof_of_work multiple times, passing two params num_of_cpus_running and this_cpu_id like so:
def proof_of_work(b, nBytes, num_of_cpus_running, this_cpu_id):
nonce = this_cpu_id
while sha256(b + uint2bytes(nonce))[:nBytes] != bytes(nBytes):
nonce = nonce + num_of_cpus_running
return nonce
So, if there are 4 cores, every one will calculate nonces like this:
core 0: 0, 4, 8, 12, 16 ...
core 1: 1, 5, 9, 13, 17 ...
core 2: 2, 6, 10, 14, 18 ...
core 3: 3, 7, 11, 15, 19 ...
So, I have to rewrite proof_of_work so when anyone of the processes finds a nonce, everyone else stops looking for nonces, taking into account that the found nonce has to be the lowest value possible for which the required bytes are 0. If a CPU speeds up for some reason, and returns a valid nonce higher than the lowest valid nonce, then the proof of work is not valid.
The only thing I don't know how to do is the part in which a process A will only stop if process B found a nonce that is lower than the nonce currently being calculated by process A. If it's higher, A keeps calculating (just in case) until it arrives at the nonce provided by B.
I hope I explained myself correctly. Also, if there is a faster implementation of anything I wrote, I would love to hear about it. Thank you very much!
|
A general method to do this is to:
think of work packets, e.g. to perform the calculation for a particular range, a range should not take long, say 0.1 seconds to a second
have some manager distribute the work packets to the worker
after a work packet has been concluded, tell the manager the result and request a new work packet
if the work is done and a result has been found accept the results from workers and give them a signal that no more work is to be performed - the workers can now safely terminate
This way you don't have to check with the manager each iteration (which would slow down everything), or do nasty things like stopping a thread mid-session. Needless to say, the manager needs to be thread safe.
This fits perfectly with your model, as you still need the results of the other workers, even if a result has been found.
Note that in your model, it could be that a thread may go out of sync with the other threads, lagging behind. You don't want to do another million calculations once a result is found. I'm just reiterating this from the question because I think the model is wrong. You should fix the model instead of fixing the implementation.
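A minimal sketch of that work-packet idea with multiprocessing.Pool might look like the following; the names check_range, CHUNK and MAX_NONCE are made up, nonce.to_bytes stands in for the question's uint2bytes, and imap is relied on to yield packet results in order so the first hit is the lowest valid nonce:
import hashlib
from multiprocessing import Pool

CHUNK = 10000          # nonces per work packet
MAX_NONCE = 2 ** 32    # assumed upper bound on the search

def check_range(args):
    # Scan one packet [start, start + CHUNK) and return its lowest valid nonce, or None
    data, n_bytes, start = args
    target = bytes(n_bytes)
    for nonce in range(start, start + CHUNK):
        digest = hashlib.sha256(data + nonce.to_bytes(8, 'big')).digest()
        if digest[:n_bytes] == target:
            return nonce
    return None

def proof_of_work(data, n_bytes, processes=4):
    packets = ((data, n_bytes, start) for start in range(0, MAX_NONCE, CHUNK))
    with Pool(processes) as pool:
        # Results come back in packet order, so the first non-None one is the answer;
        # leaving the with-block terminates the workers that are still running.
        for result in pool.imap(check_range, packets):
            if result is not None:
                return result
    return None

if __name__ == '__main__':
    print(proof_of_work(b'some block data', 2))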
|
How to print from Flask @app.route to python console
|
I would like to simply print a "hello world" to the python console after /button is called by the user.
This is my naive approach:
@app.route('/button/')
def button_clicked():
print 'Hello world!'
return redirect('/')
Background: I would like to execute other python commands from flask (not shell). "print" should be the easiest case.
I believe I have not understood a basic twist here.
Thanks in advance!
|
It seems like you have it worked out, but for others looking for this answer, an easy way to do this is by printing to stderr. You can do that like this:
from __future__ import print_function # In python 2.7
import sys
@app.route('/button/')
def button_clicked():
print('Hello world!', file=sys.stderr)
return redirect('/')
Flask will display things printed to stderr in the console. For other ways of printing to stderr, see this stackoverflow post
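Flask's application logger is another option (not from the original answer); when running the development server, especially with debug=True, these messages also land in the console (this sketch reuses the question's app and redirect):
@app.route('/button/')
def button_clicked():
    app.logger.warning('Hello world!')
    return redirect('/')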
|
Flask validates decorator multiple fields simultaneously
|
I have been using the @validates decorator in sqlalchemy.orm from flask to validate fields, and all has gone well as long as all of the fields are independent of one another such as:
@validates('field_one')
def validates_field_one(self, key, value):
#field one validation
@validates('field_two')
def validates_field_two(self, key, value):
#field two validation
However, now I need to do some validation that will require access to field_one and field_two simultaneously. It looks like validates accepts multiple arguments to the validates decorator, however, it will simply run the validation function once for each argument, as such:
@validates('field_one', 'field_two')
def validates_fields(self, keys, values):
#field validation
Results in a workflow of validating field_one and then validating field_two. However, I would like to validate both at the same time (a trivial example would be asserting that the value of field_one is not the value of field_two, e.g. disallowing self-loops in a graph where field_one and field_two refer to nodes and the validation runs on an edge). What would be the best way to go about doing that?
|
Order the fields in the order they were defined on the model. Then check if the last field is the one being validated. Otherwise just return the value unchecked. If the validator is validating one of the earlier fields, some of them will not be set yet.
@validates('field_one', 'field_two')
def validates_fields(self, key, value):
if key == 'field_two':
assert self.field_one != value
return value
See this example.
|
What are Type hints in Python 3.5
|
One of the talked about features in Python 3.5 is said to be type hints.
An example of type hints is mentioned in this article and this while also mentioning to use type hints responsibly. Can someone explain more about it and when it should be used and when not?
|
I would suggest reading PEP 483 and PEP 484 and watching this presentation by Guido on Type Hinting. In addition, more examples on Type Hints can be found at their documentation topic.
In a nutshell: Type hinting is literally what it means, you hint the type of the object(s) you're using.
Due to the highly dynamic nature of Python, inferring or checking the type of an object being used is especially hard. This fact makes it hard for developers to understand what exactly is going on in code they haven't written and, most importantly, for type checking tools found in many IDEs [PyCharm, PyDev come to mind] that are limited due to the fact that they don't have any indicator of what type the objects are. As a result they resort to trying to infer the type with (as mentioned in the presentation) around 50% success rate.
To take two important slides from the Type Hinting presentation:
Why Type Hints?
Helps Type Checkers: By hinting at what type you want the object to be the type checker can easily detect if, for instance, you're passing an object with a type that isn't expected.
Helps with documentation: A third person viewing your code will know what is expected where, ergo, how to use it without getting TypeErrors.
Helps IDEs develop more accurate and robust tools: Development Environments will be better suited at suggesting appropriate methods when they know what type your object is. You have probably experienced this with some IDE at some point, hitting the . and having methods/attributes pop up which aren't defined for an object.
Why Static Type Checkers?
Find bugs sooner: This is self evident, I believe.
The larger your project the more you need it: Again, makes sense. Static languages offer a robustness and control that
dynamic languages lack. The bigger and more complex your application becomes the more control and predictability (from
a behavioral aspect) you require.
Large teams are already running static analysis: I'm guessing this verifies the first two points.
As a closing note for this small introduction: This is an optional feature and from what I understand it has been introduced in order to reap some of the benefits of static typing.
You generally do not need to worry about it and definitely don't need to use it (especially in cases where you use Python as an auxiliary scripting language). It should be helpful when developing large projects as it offers much needed robustness, control and additional debugging capabilities.
Type Hinting with mypy:
In order to make this answer more complete, I think a little demonstration would be suitable. I'll be using mypy, the library which inspired Type Hints as they are presented in the PEP. This is mainly written for anybody bumping into this question and wondering where to begin.
Before I do that, let me reiterate the following: PEP 484 doesn't enforce anything; it is simply setting a direction for function
annotations and proposing guidelines for how type checking can/should be performed. You can annotate your functions and
hint as many things as you want; your scripts will still run regardless of the presence of annotations.
Anyways, as noted in the PEP, hinting types should generally take three forms:
Function annotations. (PEP 3107)
Stub files for built-in/user modules. (Ideal future for type checking)
Special # type: type comments. (Complementing the first two forms)**
Additionally, you'll want to use type hints in conjunction with the new typing module introduced with Py3.5. The typing module will save your life in this situation; in it, many (additional) ABCs are defined along with helper functions and decorators for use in static checking. Most ABCs in collections.abc are included but in a Generic form in order to allow subscription (by defining a __getitem__() method).
For anyone interested in a more in-depth explanation of these, the mypy documentation is written very nicely and has a lot of code samples demonstrating/describing the functionality of their checker; it is definitely worth a read.
Function annotations and special comments:
First, it's interesting to observe some of the behavior we can get when using special comments. Special # type: type comments
can be added during variable assignments to indicate the type of an object if one cannot be directly inferred. Simple assignments are
generally easily inferred but others, like lists (with regard to their contents), cannot.
Note: If we want to use any derivative of Containers and need to specify the contents for that container we must use the generic types from the typing module. These support indexing.
# generic List, supports indexing.
from typing import List
# In this case, the type is easily inferred as type: int.
i = 0
# Even though the type can be inferred as of type list
# there is no way to know the contents of this list.
# By using type: List[str] we indicate we want to use a list of strings.
a = [] # type: List[str]
# Appending an int to our list
# is statically not correct.
a.append(i)
# Appending a string is fine.
a.append("i")
print(a) # [0, 'i']
If we add these commands to a file and execute them with our interpreter, everything works just fine and print(a) just prints
the contents of list a. The # type comments have been discarded, treated as plain comments which have no additional semantic meaning.
By running this with mypy, on the other hand, we get the following response:
(Python3)jimmi@jim: mypy typeHintsCode.py
typesInline.py:14: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "str"
Indicating that a list of str objects cannot contain an int, which, statically speaking, is sound. This can be fixed by either abiding to the type of a and only appending str objects or by changing the type of the contents of a to indicate that any value is acceptable (Intuitively performed with List[Any] after Any has been imported from typing).
Function annotations are added in the form param_name : type after each parameter in your function signature and a return type is specified using the -> type notation before the ending function colon; all annotations are stored in the __annotations__ attribute for that function in a handy dictionary form. Using a trivial example (which doesn't require extra types from the typing module):
def annotated(x: int, y: str) -> bool:
return x < y
The annotated.__annotations__ attribute now has the following values:
{'y': <class 'str'>, 'return': <class 'bool'>, 'x': <class 'int'>}
If we're a complete noobie, or we are familiar with Py2.7 concepts and are consequently unaware of the TypeError lurking in the comparison of annotated, we can perform another static check, catch the error and save us some trouble:
(Python3)jimmi@jim: mypy typeHintsCode.py
typeFunction.py: note: In function "annotated":
typeFunction.py:2: error: Unsupported operand types for > ("str" and "int")
Among other things, calling the function with invalid arguments will also get caught:
annotated(20, 20)
# mypy complains:
typeHintsCode.py:4: error: Argument 2 to "annotated" has incompatible type "int"; expected "str"
These can be extended to basically any use-case and the errors caught extend further than basic calls and operations. The types you
can check for are really flexible and I have merely given a small sneak peak of its potential. A look in the typing module, the
PEPs or the mypy docs will give you a more comprehensive idea of the capabilities offered.
Stub Files:
Stub files can be used in two different non mutually exclusive cases:
You need to type check a module for which you do not want to directly alter the function signatures
You want to write modules and have type-checking but additionally want to separate annotations from content.
Stub files (with a .pyi extension) are an annotated interface of the module you are making or want to use. They contain
the signatures of the functions you want to type-check with the body of the functions discarded. To get a feel of this, given a set
of three random functions in a module named randfunc.py:
def message(s):
print(s)
def alterContents(myIterable):
return [i for i in myIterable if i % 2 == 0]
def combine(messageFunc, itFunc):
messageFunc("Printing the Iterable")
a = alterContents(range(1, 20))
return set(a)
We can create a stub file randfunc.pyi, in which we can place some restrictions if we wish to do so. The downside is that
somebody viewing the source without the stub won't really get that annotation assistance when trying to understand what is supposed
to be passed where.
Anyway, the structure of a stub file is pretty simplistic: Add all function definitions with empty bodies (pass filled) and
supply the annotations based on your requirements. Here, let's assume we only want to work with int types for our Containers.
# Stub for randfunc.py
from typing import Any, Callable, Iterable, List, Set
def message(s: str) -> None: pass
def alterContents(myIterable: Iterable[int])-> List[int]: pass
def combine(
messageFunc: Callable[[str], Any],
itFunc: Callable[[Iterable[int]], List[int]]
)-> Set[int]: pass
The combine function gives an indication of why you might want to use annotations in a different file: they sometimes clutter up
the code and reduce readability (a big no-no for Python). You could of course use type aliases, but that sometimes confuses more than it
helps (so use them wisely).
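For illustration, a hypothetical pair of aliases for the combine stub above could look like this (the alias names are made up):
from typing import Any, Callable, Iterable, List, Set

MessageFunc = Callable[[str], Any]
TransformFunc = Callable[[Iterable[int]], List[int]]

def combine(messageFunc: MessageFunc, itFunc: TransformFunc) -> Set[int]: pass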
This should get you familiarized with the basic concepts of Type Hints in Python. Even though the type checker used has been
mypy, you should gradually start to see more of them pop up, some internally in IDEs (PyCharm) and others as standard Python modules.
I'll try and add additional checkers/related packages in the following list when and if I find them (or if suggested).
Checkers I know of:
Mypy: as described here.
PyType: By Google, uses different notation from what I gather, probably worth a look.
Related Packages/Projects:
typeshed: Official Python repo housing an assortment of stub files for the standard library.
The typeshed project is actually one of the best places you can look to see how type hinting might be used in a project of your own. Let's take as an example the __init__ dunders of the Counter class in the corresponding .pyi file:
class Counter(Dict[_T, int], Generic[_T]):
@overload
def __init__(self) -> None: ...
@overload
def __init__(self, Mapping: Mapping[_T, int]) -> None: ...
@overload
def __init__(self, iterable: Iterable[_T]) -> None: ...
Where _T = TypeVar('_T') is used to define generic classes. For the Counter class we can see that it can either take no arguments in its initializer, get a single Mapping from any type to an int or take an Iterable of any type.
Notice: One thing I forgot to mention was that the typing module has been introduced on a provisional basis. From PEP 411:
A provisional package may have its API modified prior to "graduating" into a "stable" state. On one hand, this state provides the package with the benefits of being formally part of the Python distribution. On the other hand, the core development team explicitly states that no promises are made with regards to the stability of the package's API, which may change for the next release. While it is considered an unlikely outcome, such packages may even be removed from the standard library without a deprecation period if the concerns regarding their API or maintenance prove well-founded.
So take things here with a pinch of salt; I'm doubtful it will be removed or altered in significant ways, but one can never know.
** Another topic altogether but valid in the scope of type-hints: PEP 526 is an effort to replace # type comments by introducing new syntax which allows users to annotate the type of variables in simple varname: type statements.
|
How to list all exceptions a function could raise in Python 3?
|
Is there a programmatic way to get a list of all exceptions a function could raise?
I know for example that os.makedirs(path[, mode]) can raise PermissionError (and maybe others), but the documentation only mentions OSError. (This is just an example - maybe even a bad one; I am not especially interested in this function - more in the problem in general).
Is there a programmatic way to find all the possible exceptions when they are not/poorly documented? This may be especially useful in 3rd-party libraries and libraries that do not ship with Python source code.
The solution presented in "Python: How can I know which exceptions might be thrown from a method call" does not work in Python 3; there is no compiler package.
|
You can't get reliable results for some (if not most) functions. Some examples:
functions that execute arbitrary code (e.g. exec(')(rorrEeulaV esiar'[::-1]) raises ValueError)
functions that aren't written in Python
functions that call other functions that can propagate errors to the caller
functions re-raising active exceptions in the except: block
Unfortunately, this list is incomplete.
E.g. os.makedirs is written in Python and you can see its source:
...
try:
mkdir(name, mode)
except OSError as e:
if not exist_ok or e.errno != errno.EEXIST or not path.isdir(name):
raise
Bare raise re-raises the last active exception (OSError or one of its subclasses). Here's the class hierarchy for OSError:
+-- OSError
| +-- BlockingIOError
| +-- ChildProcessError
| +-- ConnectionError
| | +-- BrokenPipeError
| | +-- ConnectionAbortedError
| | +-- ConnectionRefusedError
| | +-- ConnectionResetError
| +-- FileExistsError
| +-- FileNotFoundError
| +-- InterruptedError
| +-- IsADirectoryError
| +-- NotADirectoryError
| +-- PermissionError
| +-- ProcessLookupError
| +-- TimeoutError
To get the exact exception types you'll need to look into mkdir, functions it calls, functions those functions call etc.
So, getting possible exceptions without running the function is very hard and you really should not do it.
However for simple cases like
raise Exception # without arguments
raise Exception('abc') # with arguments
a combination of ast module functionality and inspect.getclosurevars (to get exception classes, was introduced in Python 3.3) can produce quite accurate results:
from inspect import getclosurevars, getsource
from collections import ChainMap
from textwrap import dedent
import ast, os
class MyException(Exception):
pass
def g():
raise Exception
class A():
def method():
raise OSError
def f(x):
int()
A.method()
os.makedirs()
g()
raise MyException
raise ValueError('argument')
def get_exceptions(func, ids=set()):
try:
vars = ChainMap(*getclosurevars(func)[:3])
source = dedent(getsource(func))
except TypeError:
return
class _visitor(ast.NodeTransformer):
def __init__(self):
self.nodes = []
self.other = []
def visit_Raise(self, n):
self.nodes.append(n.exc)
def visit_Expr(self, n):
if not isinstance(n.value, ast.Call):
return
c, ob = n.value.func, None
if isinstance(c, ast.Attribute):
parts = []
while getattr(c, 'value', None):
parts.append(c.attr)
c = c.value
if c.id in vars:
ob = vars[c.id]
for name in reversed(parts):
ob = getattr(ob, name)
elif isinstance(c, ast.Name):
if c.id in vars:
ob = vars[c.id]
if ob is not None and id(ob) not in ids:
self.other.append(ob)
ids.add(id(ob))
v = _visitor()
v.visit(ast.parse(source))
for n in v.nodes:
if isinstance(n, (ast.Call, ast.Name)):
name = n.id if isinstance(n, ast.Name) else n.func.id
if name in vars:
yield vars[name]
for o in v.other:
yield from get_exceptions(o)
for e in get_exceptions(f):
print(e)
prints
<class '__main__.MyException'>
<class 'ValueError'>
<class 'OSError'>
<class 'Exception'>
Keep in mind that this code only works for functions written in Python.
|
Simple way to measure cell execution time in ipython notebook
|
I would like to get the time spent on the cell execution in addition to the original output from cell.
To this end, I tried %%timeit -r1 -n1 but it doesn't expose the variable defined within cell.
%%time works for a cell which only contains one statement.
In[1]: %%time
1
CPU times: user 4 µs, sys: 0 ns, total: 4 µs
Wall time: 5.96 µs
Out[1]: 1
In[2]: %%time
# Notice there is no out result in this case.
x = 1
x
CPU times: user 3 µs, sys: 0 ns, total: 3 µs
Wall time: 5.96 µs
What's the best way to do it?
|
Use cell magic and this project on github by Phillip Cloud:
Load it by putting this at the top of your notebook or put it in your config file if you always want to load it by default:
%install_ext https://raw.github.com/cpcloud/ipython-autotime/master/autotime.py
%load_ext autotime
If loaded, every output of subsequent cell execution will include the time in min and sec it took to execute it.
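Note that %install_ext has since been removed from IPython; with a recent IPython the same extension is typically installed from PyPI instead (this assumes the ipython-autotime package):
pip install ipython-autotime
and then, in the notebook:
%load_ext autotime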
|
Does spark predicate pushdown work with JDBC?
|
According to this
Catalyst applies logical optimizations such as predicate pushdown. The
optimizer can push filter predicates down into the data source,
enabling the physical execution to skip irrelevant data.
Spark supports push down of predicates to the data source.
Is this feature also available / expected for JDBC?
(From inspecting the DB logs I can see it's not the default behavior right now - the full query is passed to the DB, even if it's later limited by spark filters)
MORE DETAILS
Running Spark 1.5 with PostgreSQL 9.4
code snippet:
from pyspark import SQLContext, SparkContext, Row, SparkConf
from data_access.data_access_db import REMOTE_CONNECTION
sc = SparkContext()
sqlContext = SQLContext(sc)
url = 'jdbc:postgresql://{host}/{database}?user={user}&password={password}'.format(**REMOTE_CONNECTION)
sql = "dummy"
df = sqlContext.read.jdbc(url=url, table=sql)
df = df.limit(1)
df.show()
SQL Trace:
< 2015-09-15 07:11:37.718 EDT >LOG: execute <unnamed>: SET extra_float_digits = 3
< 2015-09-15 07:11:37.771 EDT >LOG: execute <unnamed>: SELECT * FROM dummy WHERE 1=0
< 2015-09-15 07:11:37.830 EDT >LOG: execute <unnamed>: SELECT c.oid, a.attnum, a.attname, c.relname, n.nspname, a.attnotnull OR (t.typtype = 'd' AND t.typnotnull), pg_catalog.pg_get_expr(d.adbin, d.a
drelid) LIKE '%nextval(%' FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON (c.relnamespace = n.oid) JOIN pg_catalog.pg_attribute a ON (c.oid = a.attrelid) JOIN pg_catalog.pg_type t ON (a.a
tttypid = t.oid) LEFT JOIN pg_catalog.pg_attrdef d ON (d.adrelid = a.attrelid AND d.adnum = a.attnum) JOIN (SELECT 15218474 AS oid , 1 AS attnum UNION ALL SELECT 15218474, 3) vals ON (c.oid = vals.oid
AND a.attnum = vals.attnum)
< 2015-09-15 07:11:40.936 EDT >LOG: execute <unnamed>: SET extra_float_digits = 3
< 2015-09-15 07:11:40.964 EDT >LOG: execute <unnamed>: SELECT "id","name" FROM dummy
I would expect that the last select will include a limit 1 clause - but it doesn't
|
Spark DataFrames support predicate push-down with JDBC sources, but the term predicate is used in its strict SQL meaning. It means it covers only the WHERE clause. Moreover, it looks like it is limited to logical conjunction (no IN and OR, I am afraid) and simple predicates.
Everything else, like limits, counts, ordering, groups and conditions is processed on the Spark side. One caveat, already covered on SO, is that df.count() or sqlContext.sql("SELECT COUNT(*) FROM df") is translated to SELECT 1 FROM df and requires both substantial data transfer and processing using Spark.
Does it mean it is a lost cause? Not exactly. It is possible to use an arbitrary subquery as a table argument. It is less convenient than a predicate pushdown but otherwise works pretty well:
n = ... # Number of rows to take
sql = "(SELECT * FROM dummy LIMIT {0}) AS tmp".format(int(n))
df = sqlContext.read.jdbc(url=url, table=sql)
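By contrast, a plain column filter should be pushed down as a WHERE clause (a sketch reusing the url from the question; the exact SQL that shows up in the DB log depends on the Spark version and the filter type):
df = sqlContext.read.jdbc(url=url, table="dummy")
df.where(df.id > 100).collect()
# the DB log should now contain something like: SELECT "id","name" FROM dummy WHERE id > 100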
|
Python not working in the command line of git bash
|
Python will not run in git bash (Windows). When I type python in the command line, it takes me to a blank line without saying that it has entered python 2.7.10 like its does in Powershell. It doesn't give me an error message, but python just doesn't run.
I have already made sure that the PATH environment variable includes c:\python27. What else can I check?
A session wherein this issue occurs looks like the following:
user@hostname MINGW64 ~
$ type python
python is /c/Python27/python
user@hostname MINGW64 ~
$ python
...sitting there without returning to the prompt.
|
This is a known bug in MSys2, which provides the terminal used by Git Bash. You can work around it by running a Python build without ncurses support, or by using WinPTY, used as follows:
To run a Windows console program in mintty or Cygwin sshd, prepend console.exe to the command-line:
$ build/console.exe c:/Python27/python.exe
Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 10 + 20
30
>>> exit()
The prebuilt binaries for msys are likely to work with Git Bash. (Do check whether there's a newer version if significant time has passed since this answer was posted!).
As of Git for Windows 2.7.1, also try using winpty c:/Python27/python.exe; WinPTY may be included out-of-the-box.
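A common convenience on top of that (not part of the original answer) is a shell alias so that plain python keeps working interactively:
echo "alias python='winpty python.exe'" >> ~/.bashrc
source ~/.bashrc
python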
|
Can you fix the false negative rate in a classifier in scikit learn
|
I am using a Random Forest classifier in scikit learn with an imbalanced data set of two classes. I am much more worried about false negatives than false positives. Is it possible to fix the false negative rate (to, say, 1%) and ask scikit to optimize the false positive rate somehow?
If this classifier doesn't support it, is there another classifier that does?
|
I believe the problem of class imbalance in sklearn can be partially resolved by using the class_weight parameter.
This parameter is either a dictionary, in which each class is assigned a weight, or a string that tells sklearn how to build this dictionary. For instance, setting this parameter to 'auto' will weight each class in inverse proportion to its frequency.
By weighting the class that is less present with a higher amount, you can end up with 'better' results.
Classifiers like SVM or logistic regression also offer this class_weight parameter.
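As a minimal sketch with a current scikit-learn (the weights, threshold and toy data below are illustrative, not tuned values):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Imbalanced toy data: roughly 5% positives
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Weight the minority class more heavily so that missing it (a false negative) costs more
clf = RandomForestClassifier(n_estimators=100, class_weight={0: 1, 1: 10}, random_state=0)
clf.fit(X_train, y_train)

# Lowering the decision threshold is another way to trade false negatives for false positives
proba = clf.predict_proba(X_test)[:, 1]
y_pred = (proba > 0.2).astype(int)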
This Stack Overflow answer gives some other ideas on how to handle class imbalance, like under sampling and oversampling.
|
Why is a class __dict__ a mappingproxy?
|
I wonder why a class __dict__ is a mappingproxy, but an instance __dict__ is just a plain dict
>>> class A:
pass
>>> a = A()
>>> type(a.__dict__)
<class 'dict'>
>>> type(A.__dict__)
<class 'mappingproxy'>
|
This helps the interpreter assure that the keys for class-level attributes and methods can only be strings.
Elsewhere, Python is a "consenting adults language", meaning that dicts for objects are exposed and mutable by the user. However, in the case of class-level attributes and methods for classes, if we can guarantee that the keys are strings, we can simplify and speed-up the common case code for attribute and method lookup at the class-level. In particular, the __mro__ search logic for new-style classes is simplified and sped-up by assuming the class dict keys are strings.
|
Python's equivalent of Ruby's ||=
|
To check if a variable exists and, if it does, use the original value; otherwise, use the newly assigned value.
In ruby, it's
var ||= var_new
How to write it in python?
PS:
I don't know the name of ||=, I simply can't search it in Bing.
|
I think there is some confusion from the people who aren't really sure what the conditional assignment operator (||=) does, and also some misunderstanding about how variables are spawned in Ruby.
Everyone should read this article on the subject. A TLDR quote:
A common misconception is that a ||= b is equivalent to a = a || b, but it behaves like a || a = b
In a = a || b, a is set to something by the statement on every run, whereas with a || a = b, a is only set if a is logically false (i.e. if it's nil or false) because || is 'short circuiting'. That is, if the left hand side of the || comparison is true, there's no need to check the right hand side.
And another very important note:
...a variable assignment, even if not run, immediately summons that variable into being.
# Ruby
x = 10 if 2 == 5
puts x
Even though the first line won't be run, x will exist on the second line and no exception will be raised.
This means that Ruby will absolutely ensure that there is a variable container for a value to be placed into before any righthand conditionals take place. ||= doesn't assign if a is not defined, it assigns if a is falsy (again, false or nil - nil being the default nothingness value in Ruby), whilst guaranteeing a is defined.
What does this mean for Python?
Well, if a is defined, the following:
# Ruby
a ||= 10
is actually equivalent to:
# Python
if not a:
a = 10
while the following:
# Either language
a = a or 10
is close, but it always assigns a value, whereas the previous examples do not.
And if a is not defined the whole operation is closer to:
# Python
a = None
if not a:
a = 10
Because a very explicit example of what a ||= 10 does when a is not defined would be:
# Ruby
if not defined? a
a = nil
end
if not a
a = 10
end
At the end of the day, the ||= operator is not completely translatable to Python in any kind of 'Pythonic' way, because of how it relies on the underlying variable spawning in Ruby.
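For reference, a few sketches of the closest Python idioms (none of these are exact equivalents, for the reasons above; the names are made up):
# like `a = a or b`: always reassigns, using b when a is falsy
timeout = None
timeout = timeout or 30

# for dictionary keys, setdefault only assigns when the key is missing
config = {}
config.setdefault('retries', 10)

# for attributes that may not exist, getattr supplies a default
class Settings: pass
s = Settings()
verbose = getattr(s, 'verbose', False)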
|
Create Spark DataFrame. Can not infer schema for type:
|
Could someone help me solve this problem I have with spark DataFrame?
When I do myFloatRdd.toDF() I get an error:
TypeError: Can not infer schema for type: <type 'float'>
I don't understand why...
example:
myFloatRdd = sc.parallelize([1.0,2.0,3.0])
df = myFloatRdd.toDF()
Thanks
|
SqlContext.createDataFrame, which is used under the hood, requires an RDD of Row/tuple/list/dict* or pandas.DataFrame. Try something like this:
myFloatRdd.map(lambda x: (x, )).toDF()
or even better:
from pyspark.sql import Row
row = Row("val") # Or some other column name
myFloatRdd.map(row).toDF()
* No longer supported.
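Another sketch (not from the original answer) is to pass an explicit schema so that nothing has to be inferred; the column name and nullability here are arbitrary:
from pyspark.sql.types import StructType, StructField, DoubleType

schema = StructType([StructField("val", DoubleType(), True)])
df = sqlContext.createDataFrame(myFloatRdd.map(lambda x: (x, )), schema)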
|
What are the pitfalls of using Dill to serialise scikit-learn/statsmodels models?
|
I need to serialise scikit-learn/statsmodels models such that all the dependencies (code + data) are packaged in an artefact and this artefact can be used to initialise the model and make predictions. Using the pickle module is not an option because this will only take care of the data dependency (the code will not be packaged). So, I have been conducting experiments with Dill. To make my question more precise, the following is an example where I build a model and persist it.
from sklearn import datasets
from sklearn import svm
from sklearn.preprocessing import Normalizer
import dill
digits = datasets.load_digits()
training_data_X = digits.data[:-5]
training_data_Y = digits.target[:-5]
test_data_X = digits.data[-5:]
test_data_Y = digits.target[-5:]
class Model:
def __init__(self):
self.normalizer = Normalizer()
self.clf = svm.SVC(gamma=0.001, C=100.)
def train(self, training_data_X, training_data_Y):
        normalised_training_data_X = self.normalizer.fit_transform(training_data_X)
self.clf.fit(normalised_training_data_X, training_data_Y)
def predict(self, test_data_X):
return self.clf.predict(self.normalizer.fit_transform(test_data_X))
model = Model()
model.train(training_data_X, training_data_Y)
print model.predict(test_data_X)
dill.dump(model, open("my_model.dill", 'w'))
Corresponding to this, here is how I initialise the persisted model (in a new session) and make a prediction. Note that this code does not explicitly initialise or have knowledge of the class Model.
import dill
from sklearn import datasets
digits = datasets.load_digits()
training_data_X = digits.data[:-5]
training_data_Y = digits.target[:-5]
test_data_X = digits.data[-5:]
test_data_Y = digits.target[-5:]
with open("my_model.dill") as model_file:
model = dill.load(model_file)
print model.predict(test_data_X)
Has anyone used Dill in this way? The idea is for a data scientist to extend a ModelWrapper class for each model they implement and then build the infrastructure around this that persists the models, deploys the models as services and manages the entire lifecycle of the model.
class ModelWrapper(object):
__metaclass__ = abc.ABCMeta
def __init__(self, model):
self.model = model
@abc.abstractmethod
def predict(self, input):
return
def dumps(self):
return dill.dumps(self)
def loads(self, model_string):
self.model = dill.loads(model_string)
Other than the security implications (arbitrary code execution) and the requirement that modules like scikit-learn will have to be installed on the machine thats serving the model, are there and any other pitfalls in this approach? Any comments or words of advice would be most helpful.
I think that YHat and Dato have taken a similar approach but rolled out their own implementations of Dill for similar purposes.
|
OK, to begin with: in your sample code pickle could work fine. I use pickle all the time to package a model and use it later, unless you want to send the model directly to another server or save the interpreter state, because that is what Dill is good at and pickle cannot do. It also depends on your code; with some types pickle might fail where Dill is more stable.
Dill is primarily based on pickle and so they are very similar; some things you should take into account / look into:
Limitations of Dill
The frame, generator, and traceback standard types cannot be packaged.
cloudpickle might be a good idea for your problem as well; it has better support for pickling objects than pickle (though not necessarily better than Dill), and you can pickle code easily as well.
Once the target machine has the correct libraries loaded (be careful about different Python versions as well, because they may break your code), everything should work fine with both Dill and cloudpickle, as long as you do not use the unsupported standard types.
Hope this helps.
|
Django 1.9 ImportError for import_module
|
When trying to run either runserver or shell using manage.py I get an ImportError exception. I'm using Django 1.9.
ImportError: No module named 'django.utils.importlib'
|
django.utils.importlib is a compatibility library for when Python 2.6 was still supported. It has been obsolete since Django 1.7, which dropped support for Python 2.6, and is removed in 1.9 per the deprecation cycle.
Use Python's import_module function instead:
from importlib import import_module
The reason you can import it from django.utils.module_loading is that importlib.import_module is imported in that module, it is not because module_loading in any way defines the actual function.
Since django.utils.module_loading.import_module is not part of the public API, it can be removed at any time if it is no longer used - even in a minor version upgrade.
|
Optimization Break-even Point: iterate many times over set or convert to list first?
|
Here's something I've always wondered about. I'll pose the question for Python, but I would also welcome answers which address the standard libraries in Java and C++.
Let's say you have a Python list called "my_list", and you would like to iterate over its unique elements. There are two natural approaches:
#iterate over set
for x in set(my_list):
do_something(x)
or
#list to set to list
for x in list(set(my_list)):
do_something(x)
The tension is that iterating over a list is faster than iterating over a set, but it takes time to convert a set to a list. My guess is that the answer to this question will depend on a number of factors, such as:
How many times will we need to iterate?
How big is the original list?
How many repetitions in the original list should we expect?
So I guess I'm looking for a rule of thumb of the form "If the list has x many elements with each element repeated no more than y times and you only need to iterate z times then you should iterate over the set; otherwise you should convert it to a list."
|
I'm looking for a rule of thumb...
Rule of thumb
Here's the best rule of thumb for writing optimal Python: use as few intermediary steps as possible and avoid materializing unnecessary data structures.
Applied to this question: sets are iterable. Don't convert them to another data structure just to iterate over them. Trust Python to know the fastest way to iterate over sets. If it were faster to convert them to lists, Python would do that.
On optimization:
Don't attempt to prematurely optimize by adding complexity to your program. If your program takes too long, profile it, then optimize the bottlenecks. If you're using Python, you're probably more concerned about development time than exactly how long your program takes to run.
Demonstration
In Python 2.7:
import collections
import timeit
blackhole = collections.deque(maxlen=0).extend
s = set(xrange(10000))
We see for larger n, simpler is better:
>>> timeit.timeit(lambda: blackhole(s))
108.87403416633606
>>> timeit.timeit(lambda: blackhole(list(s)))
189.0135440826416
And for smaller n the same relationship holds:
>>> s = set(xrange(10))
>>> timeit.timeit(lambda: blackhole(s))
0.2969839572906494
>>> timeit.timeit(lambda: blackhole(list(s)))
0.630713939666748
Yes, lists iterate faster than sets (try it on your own Python interpreter):
l = list(s)
timeit.repeat(lambda: blackhole(l))
But that doesn't mean you should convert sets to lists just for iteration.
Break-even Analysis
OK, so you've profiled your code and found you're iterating over a set a lot (and I presume the set is static, else what we're doing is very problematic). I hope you're familiar with the set methods, and aren't replicating that functionality. (I also think you should consider linking frozensets with tuples, because using a list (mutable) to substitute for a canonical set (also mutable) seems like it could be error-prone.) So with that caveat, let's do an analysis.
It may be that by making an investment in complexity and taking greater risks of errors from more lines of code you can get a good payoff. This analysis will demonstrate the breakeven point on this. I do not know how much more performance you would need to pay for the greater risks and dev time, but this will tell you at what point you can start to pay towards those.:
import collections
import timeit
import pandas as pd
BLACKHOLE = collections.deque(maxlen=0).extend
SET = set(range(1000000))
def iterate(n, iterable):
for _ in range(n):
BLACKHOLE(iterable)
def list_iterate(n, iterable):
l = list(iterable)
for _ in range(n):
BLACKHOLE(l)
columns = ('ConvertList', 'n', 'seconds')
def main():
results = {c: [] for c in columns}
for n in range(21):
for fn in (iterate, list_iterate):
time = min(timeit.repeat((lambda: fn(n, SET)), number=10))
results['ConvertList'].append(fn.__name__)
results['n'].append(n)
results['seconds'].append(time)
df = pd.DataFrame(results)
df2 = df.pivot('n', 'ConvertList')
df2.plot()
import pylab
pylab.show()
And it looks like your break-even point is at 5 complete iterations. With 5 or less on average, it could not possibly make sense to do this. Only at 5 or more do you begin to compensate for the additional development time, complexity (increasing maintenance costs), and risk from the greater number of lines of code.
I think you'd have to be doing it quite a lot to be worth tacking on the added complexity and lines of code to your project.
These results were created with Anaconda's Python 2.7, from an Ubuntu 14.04 terminal. You may get varying results with different implementations and versions.
Concerns
What I'm concerned about is that sets are mutable, and lists are mutable. A set will prevent you from modifying it while iterating over it, but a list created from that set will not:
>>> s = set('abc')
>>> for e in s:
... s.add(e + e.upper())
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Set changed size during iteration
If you modify your set while iterating over the derivative list, you won't get an error to tell you that you did that.
>>> for e in list(s):
... s.add(e + e.upper())
That's why I also suggested using frozensets and tuples instead. They act as a built-in guard against semantically incorrect alteration of your data.
>>> s = frozenset('abc')
>>> t_s = tuple(s)
>>> for e in t_s:
... s.add(e + e.upper())
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
AttributeError: 'frozenset' object has no attribute 'add'
In the end, you have to trust yourself to get your algorithm correct. Frequently, I'm told I gave good advice when I warn newer Python users about these sorts of things. They learn it was good advice because they didn't listen at first, and found it created unnecessary complexity, complications, and resulting problems. But there are also things like logical correctness that you'll only have yourself to blame for if you don't get them right. Minimizing things that can go wrong is a benefit that is usually worth the performance tradeoff.
And again, if performance (and not correctness or speed of development) was a prime concern while tackling this project, you wouldn't be using Python.
|
How to add a constant column in a Spark DataFrame?
|
I want to add a column in a DataFrame with some arbitrary value (that is the same for each row). I get an error when I use withColumn as follows:
dt.withColumn('new_column', 10).head(5)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-50-a6d0257ca2be> in <module>()
1 dt = (messages
2 .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt")))
----> 3 dt.withColumn('new_column', 10).head(5)
/Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col)
1166 [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)]
1167 """
-> 1168 return self.select('*', col.alias(colName))
1169
1170 @ignore_unicode_prefix
AttributeError: 'int' object has no attribute 'alias'
It seems that I can trick the function into working as I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case):
dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5)
[Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10),
Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10),
Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10),
Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10),
Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)]
This is supremely hacky, right? I assume there is a more legit way to do this?
|
The second argument for DataFrame.withColumn should be a Column so you have to use a literal:
from pyspark.sql.functions import lit
df.withColumn('new_column', lit(10))
If you need complex columns you can build these using blocks like array:
from pyspark.sql.functions import array, struct
df.withColumn("some_array", array(lit(1), lit(2), lit(3)))
df.withColumn("some_struct", struct(lit("foo"), lit(1), lit(.3)))
Exactly the same methods can be used in Scala.
import org.apache.spark.sql.functions.lit
df.withColumn("new_column", lit(10))
It is also possible, although slower, to use a UDF.
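For completeness, a rough sketch of the UDF route mentioned above; some_existing_column is a placeholder for any column already in your DataFrame, and lit remains the preferred approach:
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

# The UDF ignores its input and always returns 10
always_ten = udf(lambda _: 10, IntegerType())
df.withColumn('new_column', always_ten(df['some_existing_column']))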
|
'PipelinedRDD' object has no attribute 'toDF' in PySpark
|
I'm trying to load an SVM file and convert it to a DataFrame so I can use the ML module (Pipeline ML) from Spark.
I've just installed a fresh Spark 1.5.0 on an Ubuntu 14.04 (no spark-env.sh configured).
My my_script.py is:
from pyspark.mllib.util import MLUtils
from pyspark import SparkContext
sc = SparkContext("local", "Teste Original")
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()
and I'm running using: ./spark-submit my_script.py
And I get the error:
Traceback (most recent call last):
File "/home/fred-spark/spark-1.5.0-bin-hadoop2.6/pipeline_teste_original.py", line 34, in <module>
data = MLUtils.loadLibSVMFile(sc, "/home/fred-spark/svm_capture").toDF()
AttributeError: 'PipelinedRDD' object has no attribute 'toDF'
What I can't understand is that if I run:
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()
directly inside pyspark shell, it works.
|
The toDF method is a monkey patch executed inside the SQLContext constructor (the SparkSession constructor in 2.0+), so to be able to use it you have to create a SQLContext (or SparkSession) first:
from pyspark.sql import SQLContext # or from pyspark.sql import HiveContext
rdd = sc.parallelize([("a", 1)])
hasattr(rdd, "toDF")
## False
sqlContext = SQLContext(sc) # or HiveContext
hasattr(rdd, "toDF")
## True
rdd.toDF().show()
## +---+---+
## | _1| _2|
## +---+---+
## | a| 1|
## +---+---+
Not to mention you need a SQLContext to work with DataFrames anyway.
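In Spark 2.0+ the same idea applies with SparkSession; a hedged sketch:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # creating the session installs the toDF patch
rdd = spark.sparkContext.parallelize([("a", 1)])
rdd.toDF(["letter", "number"]).show()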
|
Selenium unexpectedly having issues
|
I have been using selenium now for a while on a number of projects.
With code that was running I am now receiving the following error:
C:\Users\%USER%\Miniconda\python.exe C:/Users/%USER%/PycharmProjects/c_r/quick_debug.py
Traceback (most recent call last):
File "C:/Users/%USER%/PycharmProjects/c_r/quick_debug.py", line 17, in <module>
c.setUp()
File "C:\Users\%USER%\PycharmProjects\c_r\c.py", line 40, in setUp
self.driver = webdriver.Chrome()
File "C:\Users\%USER%\Miniconda\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 67, in __init__
desired_capabilities=desired_capabilities)
File "C:\Users\%USER%\Miniconda\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 87, in __init__
self.start_session(desired_capabilities, browser_profile)
File "C:\Users\%USER%\Miniconda\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 141, in start_session
'desiredCapabilities': desired_capabilities,
File "C:\Users\%USER%\Miniconda\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 201, in execute
self.error_handler.check_response(response)
File "C:\Users\%USER%\Miniconda\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 181, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: unrecognized Blink revision: 3b3c00f2d95c45cca18ab944acced413fb759311
(Driver info: chromedriver=2.10.267521,platform=Windows NT 6.3 x86_64)
Process finished with exit code 1
where c.setUp() is:
def setUp(self):
    self.driver = webdriver.Chrome()
Again - this is code that WAS running, and I am unsure how to fix this "Unrecognized blink revision" error.
Nothing has consciously changed.
Thank you for any pointers
|
After having a quick look at the source code, I think this is a compatibility issue between ChromeDriver and Chrome itself - I suspect your Chrome auto-updated and now is too new for ChromeDriver 2.10. In other words: update ChromeDriver, latest is currently 2.19.
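If the updated chromedriver is not on your PATH, you can point Selenium at the new binary explicitly; the path below is only an example:
from selenium import webdriver

# Example path - use wherever you unpacked the updated chromedriver
driver = webdriver.Chrome(executable_path=r"C:\tools\chromedriver.exe")
driver.get("https://example.com")
driver.quit()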
|
PyCrypto on python 3.5
|
I found some exes of PyCrypto for Python 3.3 and 3.4, but nothing for Python 3.5.
When I try to install PyCrypto using pip install, it says:
warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.
Is there any way to install PyCrypto on Python 3.5 on Windows 10? Thanks!
|
That warning shouldn't stop the build; more likely you are lacking the Visual Studio 2015 compiler, which is necessary to build binary extensions (which PyCrypto has). See the Python Packaging User Guide for which compiler you need for your version of Python.
The reason you need the compiler is PyCrypto only offers a Source Distribution officially. So, you have two options for installing PyCrypto:
1. Build from source:
This is actually fairly simple, but it requires that you install some extra software. Again, refer to the Python Packaging User Guide linked above to find the suitable compiler for your version of Python. I think that there is an option to only install the relevant compiler for Python if you do a custom install of Visual Studio.
Then, after the installation is completed, and you probably had to restart your computer, you should be able to simply run pip install pycrypto and pip will download the source and compile it for you.
If you also installed wheel (pip install wheel) then pip will create a wheel from the source and cache it, which is how I created the wheels that I'm sharing in option 2.
2. Install a Built Distribution from a 3rd party
I happen to be a 3rd party and have shared wheels I compiled for PyCrypto 2.6.1 w/ Python 3.5 for 32 and 64-bit Windows on my GitHub repo. I included installation instructions on the readme of that repo.
If you're looking for a built distribution for an older version of Python and/or PyCrypto, I've also linked to Voidspace on my repo's readme (sorry, I'm limited to 2 links here).
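Whichever route you take, a quick import check confirms the build worked; a minimal sketch with arbitrary demo values:
from Crypto.Cipher import AES

cipher = AES.new(b'0123456789abcdef', AES.MODE_ECB)  # 16-byte demo key; ECB for illustration only
ciphertext = cipher.encrypt(b'sixteen byte msg')     # plaintext length must be a multiple of 16
print(len(ciphertext))  # 16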
|
How to use async/await in Python 3.5?
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import time
async def foo():
    await time.sleep(1)
foo()
I couldn't make this dead-simple example run:
RuntimeWarning: coroutine 'foo' was never awaited foo()
|
Running coroutines requires an event loop. Use the asyncio library to create one:
import asyncio
loop = asyncio.get_event_loop()
loop.run_until_complete(foo())
loop.close()
Also see the Tasks and Coroutines chapter of the asyncio documentation.
Note however that time.sleep() is not an awaitable object. It returns None so you get an exception after 1 second:
>>> loop.run_until_complete(foo())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python3.5/asyncio/base_events.py", line 342, in run_until_complete
return future.result()
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(value)
File "<stdin>", line 2, in foo
TypeError: object NoneType can't be used in 'await' expression
You should use the asyncio.sleep() coroutine instead:
async def foo():
    await asyncio.sleep(1)
|
Are global variables thread safe in flask?
|
In my app the state of a common object is changed by making requests, and the response depends on the state.
class SomeObj():
    def __init__(self, param):
        self.param = param

    def query(self):
        self.param += 1
        return self.param

global_obj = SomeObj(0)

@app.route('/')
def home():
    flash(global_obj.query())
    render_template('index.html')
If I run this on my development server, I expect to get 1, 2, 3 and so on. If requests are made from 100 different clients simultaneously, can something go wrong? The expected result would be that the 100 different clients each see a unique number from 1 to 100. Or will something like this happen:
Client 1 queries. self.param is incremented by 1.
Before the return statement can be executed, the thread switches over to client 2. self.param is incremented again.
The thread switches back to client 1, and the client is returned the number 2, say.
Now the thread moves to client 2 and returns him/her the number 3.
Since there were only two clients, the expected results were 1 and 2, not 2 and 3. A number was skipped.
Will this actually happen as I scale up my application? What alternatives to a global variable should I look at?
|
You can't use global variables to hold this sort of data. Not only is it not thread safe, it's not process safe, and WSGI servers in production spawn multiple processes. So not only would your counts be wrong if you were using threads to handle requests, they would also vary depending on which process handled the request.
The development server is single-threaded and single-process by default. You won't see the behavior you describe since each request will be handled synchronously. Enable threads or processes and you will see it: app.run(threaded=True) or app.run(processes=10).
Use a data source outside of Flask to hold global data. A database, memcached, or redis are all appropriate separate storage areas, depending on your needs. You could also use the session for simple data that is per-user.
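For example, a counter backed by Redis stays correct across threads and processes; a minimal sketch, assuming a local Redis server and the redis-py package:
import redis
from flask import Flask, flash, render_template

app = Flask(__name__)
app.secret_key = 'change-me'  # flash() uses the session, which needs a secret key
store = redis.Redis()         # assumes Redis is running on localhost:6379

@app.route('/')
def home():
    count = store.incr('hits')  # INCR is atomic, so concurrent requests get unique values
    flash(count)
    return render_template('index.html')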
|
Why does open(True, 'w') print the text like sys.stdout.write?
|
I have the following code:
with open(True, 'w') as f:
    f.write('Hello')
Why does this code print the text Hello instead of raise an error?
|
From the built-in function documentation on open():
open(file, mode='r', buffering=-1...
file is either a string or bytes object giving the pathname (absolute or relative to the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped
That "integer file descriptor" is further described in the os module documentation:
For example, standard input is usually file descriptor 0, standard output is 1, and standard error is 2. Further files opened by a process will then be assigned 3, 4, 5, and so forth.
Since booleans are an int subclass, False can be used interchangeably with 0 and True with 1. Therefore, opening a file descriptor of True is the same as opening a file descriptor of 1, which will select standard output.
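A small demonstration; closefd=False keeps Python from closing the real standard output when the block ends:
import sys

with open(1, 'w', closefd=False) as f:  # 1 == True == stdout's file descriptor
    f.write('Hello\n')

sys.stdout.write('Hello\n')  # roughly equivalent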
|
Unable to install nltk on Mac OS El Capitan
|
I did sudo pip install -U nltk as suggested by the nltk documentation.
However, I am getting the following output:
Collecting nltk
Downloading nltk-3.0.5.tar.gz (1.0MB)
100% |████████████████████████████████| 1.0MB 516kB/s
Collecting six>=1.9.0 (from nltk)
Downloading six-1.9.0-py2.py3-none-any.whl
Installing collected packages: six, nltk
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 211, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 311, in run
root=options.root_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 640, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 716, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 125, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 315, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-7dp3on-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
Basically, it is unable to upgrade six from 1.4.1 to 1.9.0. I tried to do that separately but got the same error.
I mention El Capitan because some people are facing problems with other Python installations on El Capitan, and I am wondering if this is related.
Does anyone know how to solve this problem?
|
Here is the way how I fixed the issues:
First, install Xcode CLI:
xcode-select --install
Then reinstall Python:
sudo brew reinstall python
Finally, install nltk:
sudo pip install -U nltk
Hope it helps :)
|
Flask: 'session' vs. 'g'?
|
I'm trying to understand the differences in functionality and purpose between g and session. Both are objects to 'hang' session data on, am I right? If so, what exactly are the differences and which one should I use in what cases?
|
No, g is not an object to hang session data on. g data is not persisted between requests.
session gives you a place to store data per specific browser. As a user of your Flask app returns for more requests with a specific browser, the session data is carried over across those requests.
g on the other hand is data shared between different parts of your code base within one request cycle. g can be set up during before_request hooks, is still available during the teardown_request phase and once the request is done and sent out to the client, g is cleared.
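As a rough sketch of the difference (connect_to_database is a hypothetical helper, not a Flask API):
from flask import Flask, g, session

app = Flask(__name__)
app.secret_key = 'change-me'  # required for session cookies

@app.before_request
def load_db():
    g.db = connect_to_database()  # hypothetical helper; g lives only for this request

@app.teardown_request
def close_db(exc):
    db = getattr(g, 'db', None)
    if db is not None:
        db.close()

@app.route('/count')
def count():
    # session persists across requests from the same browser
    session['visits'] = session.get('visits', 0) + 1
    return str(session['visits'])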
|
Methods for writing Parquet files using Python?
|
I'm having trouble finding a library that allows Parquet files to be written using Python. Bonus points if I can use Snappy or a similar compression mechanism in conjunction with it.
Thus far the only method I have found is using Spark with the pyspark.sql.DataFrame Parquet support.
I have some scripts that need to write Parquet files that are not Spark jobs. Is there any approach to writing Parquet files in Python that doesn't involve pyspark.sql?
|
As of February 2016 there seems to be NO Python-only library capable of writing Parquet files.
If you only need to read Parquet files there is python-parquet.
As a workaround you will have to rely on some other process, such as pyspark.sql (which uses Py4J and runs on the JVM and can thus not be used directly from your average CPython program).
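For reference, the pyspark.sql workaround looks roughly like this (paths and data are examples):
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "parquet-writer")
sqlContext = SQLContext(sc)

df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.parquet("/tmp/example.parquet")  # compression is configurable via spark.sql.parquet.compression.codec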
|
Error installing Pillow on ubuntu 14.04
|
I'm trying to install Pillow on Ubuntu 14.04 using this command:
pip install Pillow
but the installation fails with this error:
ValueError: --enable-jpeg requested but jpeg not found, aborting.
|
The problem was that the package libjpeg-dev was not installed. To solve the problem you should do this:
sudo apt-get install libjpeg-dev
|
In Python, is there an async equivalent to multiprocessing or concurrent.futures?
|
Basically, I'm looking for something that offers a parallel map using python3 coroutines as the backend instead of threads or processes. I believe there should be less overhead when performing highly parallel IO work.
Surely something similar already exists, be it in the standard library or some widely used package?
|
DISCLAIMER: PEP 0492 defines only syntax and usage for coroutines. They require an event loop to run, which is most likely asyncio's event loop.
Asynchronous map
I don't know any implementation of map based on coroutines. However it's trivial to implement basic map functionality using asyncio.gather():
import asyncio

def async_map(coroutine_func, iterable):
    loop = asyncio.get_event_loop()
    future = asyncio.gather(*(coroutine_func(param) for param in iterable))
    return loop.run_until_complete(future)
This implementation is really simple. It creates a coroutine for each item in the iterable, joins them into a single coroutine, and executes the joined coroutine on the event loop.
The provided implementation covers most cases. However, it has a problem: with a long iterable you would probably want to limit the number of coroutines running in parallel. I can't come up with a simple implementation that is efficient and preserves order at the same time, so I will leave it as an exercise for the reader.
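One possible way to cap concurrency is to wrap each coroutine in an asyncio.Semaphore; a sketch only, with no claim about being the most efficient approach:
import asyncio

def async_map_limited(coroutine_func, iterable, limit=10):
    loop = asyncio.get_event_loop()
    semaphore = asyncio.Semaphore(limit)

    async def bounded(param):
        async with semaphore:  # at most `limit` coroutines run at once
            return await coroutine_func(param)

    future = asyncio.gather(*(bounded(param) for param in iterable))
    return loop.run_until_complete(future)  # gather returns results in input order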
Performance
You claimed:
I believe there should be less overhead when performing highly parallel IO work.
It requires proof, so here is a comparison of multiprocessing implementation, gevent implementation by a p and my implementation based on coroutines. All tests were performed on Python 3.5.
Implementation using multiprocessing:
from multiprocessing import Pool
import time

def async_map(f, iterable):
    with Pool(len(iterable)) as p:  # run one process per item to measure overhead only
        return p.map(f, iterable)

def func(val):
    time.sleep(1)
    return val * val
Implementation using gevent:
import gevent
from gevent.pool import Group

def async_map(f, iterable):
    group = Group()
    return group.map(f, iterable)

def func(val):
    gevent.sleep(1)
    return val * val
Implementation using asyncio:
import asyncio

def async_map(f, iterable):
    loop = asyncio.get_event_loop()
    future = asyncio.gather(*(f(param) for param in iterable))
    return loop.run_until_complete(future)

async def func(val):
    await asyncio.sleep(1)
    return val * val
The testing program is the usual timeit:
$ python3 -m timeit -s 'from perf.map_mp import async_map, func' -n 1 'async_map(func, list(range(10)))'
Results:
Iterable of 10 items:
multiprocessing - 1.05 sec
gevent - 1 sec
asyncio - 1 sec
Iterable of 100 items:
multiprocessing - 1.16 sec
gevent - 1.01 sec
asyncio - 1.01 sec
Iterable of 500 items:
multiprocessing - 2.31 sec
gevent - 1.02 sec
asyncio - 1.03 sec
Iterable of 5000 items:
multiprocessing - failed (spawning 5k processes is not so good idea!)
gevent - 1.12 sec
asyncio - 1.22 sec
Iterable of 50000 items:
gevent - 2.2 sec
asyncio - 3.25 sec
Conclusions
Concurrency based on an event loop works faster when the program does mostly I/O, not computations. Keep in mind that the difference will be smaller when there is less I/O and more computation involved.
The overhead introduced by spawning processes is significantly bigger than the overhead introduced by event-loop-based concurrency. It means that your assumption is correct.
Comparing asyncio and gevent, we can say that asyncio has 33-45% bigger overhead. It means that creating greenlets is cheaper than creating coroutines.
As a final conclusion: gevent has better performance, but asyncio is part of the standard library. The difference in performance (absolute numbers) isn't very significant. gevent is a quite mature library, while asyncio is relatively new, but it is advancing quickly.
|
Python pickle error: UnicodeDecodeError
|
I'm trying to do some text classification using Textblob. I'm first training the model and serializing it using pickle as shown below.
import pickle
from textblob.classifiers import NaiveBayesClassifier
with open('sample.csv', 'r') as fp:
    cl = NaiveBayesClassifier(fp, format="csv")

f = open('sample_classifier.pickle', 'wb')
pickle.dump(cl, f)
f.close()
And when I try to run this file:
import pickle
f = open('sample_classifier.pickle', encoding="utf8")
cl = pickle.load(f)
f.close()
I get this error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position
0: invalid start byte
Following are the content of my sample.csv:
My SQL is not working correctly at all. This was a wrong choice, SQL
I've issues. Please respond immediately, Support
Where am I going wrong here? Please help.
|
By choosing to open the file in mode wb, you are choosing to write in raw binary. There is no character encoding being applied.
Thus to read this file, you should simply open in mode rb.
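In other words, the loading code could be adjusted like this:
import pickle

with open('sample_classifier.pickle', 'rb') as f:  # binary mode, matching how the file was written
    cl = pickle.load(f)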
|
How to return a specific point after an error in 'while' loop
|
I'm trying to write a program that include a while loop, in this loop I have an error message if something goes wrong. It's kinda like this;
while True:
    questionx = input("....")
    if x == "SomethingWrongabout questionX":
        print("Something went wrong.")
        continue
    other codes...
    questiony = input("....")
    if y == "SomethingWrongabout questionY":
        print("Something went wrong.")
        continue
    other codes...
    questionz = input("....")
    if z == "SomethingWrongabout questionZ":
        print("Something went wrong.")
        continue
    other codes..
The problem is as follows: when an error occurs after question x, the program goes back to the beginning. It starts from the beginning, not from y or z. But there is no problem at x, so the program should start asking questions again from y or z, because that is where the problem occurred.
How can I make the program restart from a specific point? If there is an error only at question y, the program must start asking again from y; if only at z, it must start from z, not from the beginning and not from x.
Should I use more than one while loop for this or is there anything that makes this work only in one loop?
|
[EDIT from generator to function]
You can try a function:
def check_answer(question, answer):
    while True:
        current_answer = input(question)
        if current_answer == answer:
            break
        print "Something wrong with question {}".format(question)
    return current_answer
answerX = check_answer("Question about X?\n", "TrueX")
answerY = check_answer("Question about Y?\n", "TrueY")
answerZ = check_answer("Question about Z?\n", "TrueZ")
Not sure if you want to keep the values, but if you need to tweak it, this should give you hints.
Results:
Question about X?
"blah"
Something wrong with question Question about X?
Question about X?
"blah"
Something wrong with question Question about X?
Question about X?
"TrueX"
Question about Y?
"TrueY"
Question about Z?
"blah"
Something wrong with question Question about Z?
Question about Z?
"blah"
Something wrong with question Question about Z?
Question about Z?
"TrueZ"
Edit per comment:
def check_answer(question, answers):
    while True:
        current_answer = input(question)
        if current_answer in answers:
            break
        print "Something wrong with question {}".format(question)
    return current_answer

answerX = check_answer("Question about X?\n", ("TrueX", "TrueY"))
|
How do I run psycopg2 on El Capitan without hitting a libssl error
|
I've got a python django dev setup on my mac and have just upgraded to El Capitan.
I've got psycopg2 installed in a virtualenv but when I run my server I get the following error -
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: dlopen(/Users/aidan/Environments/supernova/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: libssl.1.0.0.dylib
Referenced from: /Users/aidan/Environments/supernova/lib/python2.7/site-packages/psycopg2/_psycopg.so
Reason: image not found
I've tried reinstalling my virtualenv
pip install -f requirements.txt
And I've tried upgrading psycopg2
pip uninstall psycopg2
pip install psycopg2
But I'm still getting the same error.
I've also tried adding symlinks to /usr/lib but El Capitan's new rootless thing doesn't allow it -
$ sudo ln -s /Library/PostgreSQL/9.2/lib/libssl.1.0.0.dylib
/usr/lib
ln: /usr/lib/libssl.1.0.0.dylib: Operation not permitted
So I tried /usr/local to no avail.
The system version of openssl seems to be 1.0.2 -
$ openssl version
OpenSSL 1.0.2d 9 Jul 2015
How do I fix this?
|
I tried the following:
I have brew installed on my machine. Running $ brew doctor gave me a suggestion to do the following:
$ sudo chown -R $(whoami):admin /usr/local
Once this was done, I re-installed psycopg2 and performed the following:
$ sudo ln -s /Library/PostgreSQL/9.3/lib/libssl.1.0.0.dylib /usr/local/lib/
$ sudo ln -s /Library/PostgreSQL/9.3/lib/libcrypto.1.0.0.dylib /usr/local/lib/
Please note the version of your PostgreSQL and the path /usr/local/lib.
Doing this got me back to a working setup again.
P.S.: The brew suggested step might not be relevant here but I put this up because you were seeing permission issues. You could also disable rootless mode.
|
Are constant computations cached in Python?
|
Say I have a function in Python that uses a constant computed float value like 1/3.
def div_by_3(x):
    return x * (1/3)
If I call the function repeatedly, will the value of 1/3 be automatically cached for efficiency? Or do I have to do something manually such as the following?
def div_by_3(x, _ONE_THIRD=1/3):
    return x * _ONE_THIRD
|
Find out for yourself! The dis module is great for inspecting this sort of stuff:
>>> from dis import dis
>>> def div_by_3(x):
... return x * (1/3.)
...
>>> dis(div_by_3)
2 0 LOAD_FAST 0 (x)
3 LOAD_CONST 1 (1)
6 LOAD_CONST 2 (3.0)
9 BINARY_DIVIDE
10 BINARY_MULTIPLY
11 RETURN_VALUE
As you can see, the 1/3 calculation happens every time. (Note: I changed 3 to 3. to force float division, otherwise it'd just be 0. You can also enable future-division, which actually changed the behavior, see edit section below).
And your second approach:
>>> def db3(x, _ONE_THIRD=1/3.):
... return x * _ONE_THIRD
...
>>> dis(db3)
2 0 LOAD_FAST 0 (x)
3 LOAD_FAST 1 (_ONE_THIRD)
6 BINARY_MULTIPLY
7 RETURN_VALUE
More information on the second can be found by inspecting the function object:
>>> inspect.getargspec(db3)
ArgSpec(args=['x', '_ONE_THIRD'], varargs=None, keywords=None, defaults=(0.3333333333333333,))
You can see the default value is cached in there.
EDIT: Turns out this is a little more interesting -- in Python 3 they do get cached (and also in Python 2.7 when you enable from __future__ import division):
>>> dis.dis(div_by_3)
2 0 LOAD_FAST 0 (x)
3 LOAD_CONST 3 (0.3333333333333333)
6 BINARY_MULTIPLY
7 RETURN_VALUE
Switching to integer division (//) in either Python 3 or 2.7-with-future-division doesn't change this, it just alters the constant to be a 0 instead of 0.333.. Also, using integer division directly in 2.7 without future-division will cache the 0 as well.
Learned something new today!
|
Cassandra cqlsh "unable to connect to any servers"
|
I get the following message when executing cqlsh.bat on the command line
Connection error: ('Unable to connect to any servers', {'127.0.0.1': ProtocolError("cql_version '3.3.0' is not supported by remote (w/ native protocol). Supported versions: [u'3.2.0']",)})
I'm running Python version 2.7.10 along with Cassandra version 2.2.1. Not sure if it's related but when I start the Cassandra server I need to run "Set-ExecutionPolicy Unrestricted" on PowerShell or else it doesn't work.
|
You can force cqlsh to use a specific cql version using the flag
--cqlversion="#.#.#"
Example cqlsh usage (and key/values):
cqlsh 12.34.56.78 1234 -u username -p password --cqlversion="3.2.0"
cqlsh (IP ADDR) (PORT) (DB_USERN) (DB_PASS) (VER)
|
How can I count the occurrence of each word in a document using a dictionary comprehension
|
I have a list of lists in Python full of texts. It is like a set of words from each document: for every document I have a list, and then one list for all documents.
Each list contains only unique words. My purpose is to count the occurrence of each word across the complete set of documents. I am able to do this successfully using the below code:
for x in texts_list:
    for l in x:
        if l in term_appearance:
            term_appearance[l] += 1
        else:
            term_appearance[l] = 1
But I want to use a dictionary comprehension to do the same. This is the first time I am trying to write a dictionary comprehension, and using previous posts on Stack Overflow I have been able to write the following:
from collections import defaultdict
term_appearance = defaultdict(int)
{{term_appearance[l] : term_appearance[l] + 1 if l else term_appearance[l] : 1 for l in x} for x in texts_list}
Previous post for reference:
Simple syntax error in Python if else dict comprehension
As suggested in above post, I have also used the following code:
{{l : term_appearance[l] + 1 if l else 1 for l in x} for x in texts_list}
The above code was successful in producing empty lists but ultimately threw the following traceback :
[]
[]
[]
[]
Traceback (most recent call last):
File "term_count_fltr.py", line 28, in <module>
{{l : term_appearance[l] + 1 if l else 1 for l in x} for x in texts_list}
File "term_count_fltr.py", line 28, in <setcomp>
{{l : term_appearance[l] + 1 if l else 1 for l in x} for x in texts_list}
TypeError: unhashable type: 'dict'
Any help in improving my current understanding would be much appreciated.
Looking at the above error, I also tried
[{l : term_appearance[l] + 1 if l else 1 for l in x} for x in texts_list]
This ran without any error but the output was empty lists only.
|
As explained in the other answers, the issue is that a dictionary comprehension creates a new dictionary, so you don't get a reference to that new dictionary until after it has been created. You cannot use a dictionary comprehension for what you are doing.
Given that, what you are doing is trying to re-implement what is already done by collections.Counter. You could simply use Counter. Example:
from collections import Counter
term_appearance = Counter()
for x in texts_list:
    term_appearance.update(x)
Demo -
>>> l = [[1,2,3],[2,3,1],[5,4,2],[1,1,3]]
>>> from collections import Counter
>>> term_appearance = Counter()
>>> for x in l:
... term_appearance.update(x)
...
>>> term_appearance
Counter({1: 4, 2: 3, 3: 3, 4: 1, 5: 1})
If you really want to do this in some kind of comprehension, you can do:
from collections import Counter
term_appearance = Counter()
[term_appearance.update(x) for x in texts_list]
Demo -
>>> l = [[1,2,3],[2,3,1],[5,4,2],[1,1,3]]
>>> from collections import Counter
>>> term_appearance = Counter()
>>> [term_appearance.update(x) for x in l]
[None, None, None, None]
>>> term_appearance
Counter({1: 4, 2: 3, 3: 3, 4: 1, 5: 1})
The output [None, None, None, None] is from the list comprehension resulting in that list (because this was run interactively), if you run this in a script as python <script>, that output would simply be discarded.
You can also use itertools.chain.from_iterable() to create a flattened list from your text_lists and then use that for Counter. Example:
from collections import Counter
from itertools import chain
term_appearance = Counter(chain.from_iterable(texts_list))
Demo -
>>> from collections import Counter
>>> from itertools import chain
>>> term_appearance = Counter(chain.from_iterable(l))
>>> term_appearance
Counter({1: 4, 2: 3, 3: 3, 4: 1, 5: 1})
Also, another issue in your original code in line -
{{term_appearance[l] : term_appearance[l] + 1 if l else term_appearance[l] : 1 for l in x} for x in texts_list}
This is actually a set comprehension with a dictionary comprehension nested inside.
This is the reason you are getting the error TypeError: unhashable type: 'dict'. After first running the dictionary comprehension and creating a dict, it tries to add that dict into the set. But dictionaries are not hashable, hence the error.
|
built-in max heap API in Python
|
The default heapq is a min-heap implementation, and I'm wondering if there is an option for a max heap? Thanks.
I tried the solution using _heapify_max for a max heap, but how do I handle dynamically pushing/popping elements? It seems _heapify_max can only be used at initialization time.
import heapq
def heapsort(iterable):
    h = []
    for value in iterable:
        heapq.heappush(h, value)
    return [heapq.heappop(h) for i in range(len(h))]

if __name__ == "__main__":
    print heapsort([1, 3, 5, 7, 9, 2, 4, 6, 8, 0])
Edit: I tried _heapify_max and it doesn't seem to work for dynamically pushing/popping elements. I tried both methods below and they output the same thing; both output [0, 1, 2, 3, 4, 5, 6, 7, 8, 9].
def heapsort(iterable):
    h = []
    for value in iterable:
        heapq.heappush(h, value)
    return [heapq.heappop(h) for i in range(len(h))]

def heapsort2(iterable):
    h = []
    heapq._heapify_max(h)
    for value in iterable:
        heapq.heappush(h, value)
    return [heapq.heappop(h) for i in range(len(h))]

if __name__ == "__main__":
    print heapsort([1, 3, 5, 7, 9, 2, 4, 6, 8, 0])
    print heapsort2([1, 3, 5, 7, 9, 2, 4, 6, 8, 0])
Thanks in advance,
Lin
|
In the past I have simply used sortedcontainers's SortedList for this, as:
> a = SortedList()
> a.add(3)
> a.add(2)
> a.add(1)
> a.pop()
3
It's not a heap, but it's fast and works directly as required.
If you absolutely need it to be a heap, you could make a general negation class to hold your items.
class Neg():
    def __init__(self, x):
        self.x = x

    def __cmp__(self, other):
        return -cmp(self.x, other.x)

def maxheappush(heap, item):
    heapq.heappush(heap, Neg(item))

def maxheappop(heap):
    return heapq.heappop(heap).x
But that will be using a little more memory.
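For plain numbers there is also a lighter-weight idiom (not part of the answer above): push negated values onto the ordinary min-heap and negate again when popping:
import heapq

h = []
for value in [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]:
    heapq.heappush(h, -value)  # store negated values so the smallest item is the original maximum

print([-heapq.heappop(h) for _ in range(len(h))])  # [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]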
|
Confused about a Python min quiz
|
Just now I saw a quiz on this page:
>>> x, y = ???
>>> min(x, y) == min(y, x)
False
The example answer is
x, y = {0}, {1}
From the documentation I know that:
min(iterable[, key=func]) -> value
min(a, b, c, ...[, key=func]) -> value
With a single iterable argument, return its smallest item.
With two or more arguments, return the smallest argument.
But why is min({0},{1})={0} and min({1},{0})={1}?
I also tried a few others:
min({0,2},1) # 1
min(1,{0,2}) # 1
min({1},[2,3]) # [2,3]
min([2,3],1) # 1
|
The comparison operators <, <=, >=, and > check whether one set is a strict subset, subset, superset, or strict superset of another, respectively.
{0} and {1} are False for all of these comparisons, so neither argument compares as "smaller"; min simply keeps its first argument, and the result depends on argument order.
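You can check the subset comparisons directly:
>>> {0} < {1}  # is {0} a strict subset of {1}?
False
>>> {1} < {0}
False
>>> min({0}, {1}), min({1}, {0})
({0}, {1})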
|
Accessing attributes on literals work on all types, but not `int`; why?
|
I have read that everything in python is an object, and as such I started to experiment with different types and invoking __str__ on them — at first I was feeling really excited, but then I got confused.
>>> "hello world".__str__()
'hello world'
>>> [].__str__()
'[]'
>>> 3.14.__str__()
'3.14'
>>> 3..__str__()
'3.0'
>>> 123.__str__()
File "<stdin>", line 1
123.__str__()
^
SyntaxError: invalid syntax
Why does something.__str__() work for "everything" besides int?
Is 123 not an object of type int?
|
You need parens:
(4).__str__()
The problem is the lexer thinks "4." is going to be a floating-point number.
Also, this works:
x = 4
x.__str__()
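A space between the integer literal and the dot also keeps the lexer from reading the dot as part of a float:
>>> 123 .__str__()
'123'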
|
python 3.5: TypeError: a bytes-like object is required, not 'str'
|
I've very recently migrated to Py 3.5.
This code was working properly in Python 2.7:
with open(fname, 'rb') as f:
    lines = [x.strip() for x in f.readlines()]

for line in lines:
    tmp = line.strip().lower()
    if 'some-pattern' in tmp: continue
    # ... code
After upgrading to 3.5, I'm getting the:
TypeError: a bytes-like object is required, not 'str'
error on the last line (the pattern search code).
I've tried using the .decode() function on either side of the statement, also tried:
if tmp.find('some-pattern') != -1: continue
- to no avail.
I was able to resolve almost all of the 2-to-3 issues quickly, but this little statement is bugging me.
|
You opened the file in binary mode:
with open(fname, 'rb') as f:
This means that all data read from the file is returned as bytes objects, not str. You cannot then use a string in a containment test:
if 'some-pattern' in tmp: continue
You'd have to use a bytes object to test against tmp instead:
if b'some-pattern' in tmp: continue
or open the file as a textfile instead by replacing the 'rb' mode with 'r'.
|
Python Requests/urllib - monitoring bandwidth usage
|
I want to log the total bytes downloaded and uploaded by my Python script.
import requests

total_downloaded_bytes = 0

def bandwidth_hook(r, *args, **kwargs):
    global total_downloaded_bytes
    total_downloaded_bytes += len(r.content)

req = requests.session()
req.hooks = {'response': bandwidth_hook}
The above code doesn't take into account HTTP compression (if I'm right) and the size of headers.
Is there a way to count total uploaded and downloaded bytes from a requests.session? If not, what about a script-wide count?
|
You can access the r.request object to calculate outgoing bytes, and you can determine incoming bytes (compressed or not) by looking at the content-length header of the response. This should suffice for 99% of all requests you normally would make.
Calculating the byte size of headers is easy enough; just add up key and value lengths, add 4 bytes for the colon and whitespace, plus 2 more for the blank line:
def header_size(headers):
    return sum(len(key) + len(value) + 4 for key, value in headers.items()) + 2
There is also the initial line; that's {method} {path_url} HTTP/1.1{CRLF} for requests, and HTTP/1.x {status_code} {reason}{CRLF} for the response. Those lengths are all also available to you.
Total size then is:
request_line_size = len(r.request.method) + len(r.request.path_url) + 12
request_size = request_line_size + header_size(r.request.headers) + int(r.request.headers.get('content-length', 0))
response_line_size = len(r.reason) + 15
response_size = response_line_size + header_size(r.headers) + int(r.headers.get('content-length', 0))
total_size = request_size + response_size
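Putting the pieces together inside a response hook might look like this; a sketch built on the question's code, reusing the header_size helper above:
import requests

total_bytes = 0

def bandwidth_hook(r, *args, **kwargs):
    global total_bytes
    request_line_size = len(r.request.method) + len(r.request.path_url) + 12
    response_line_size = len(r.reason) + 15
    total_bytes += (
        request_line_size + header_size(r.request.headers)
        + int(r.request.headers.get('content-length', 0))
        + response_line_size + header_size(r.headers)
        + int(r.headers.get('content-length', 0))
    )

session = requests.Session()
session.hooks = {'response': bandwidth_hook}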
|
Why 2700 records (320KB each) should take 30 seconds to be fetched?
|
I have 2700 records in MongoDB. Each document has a size of approximately 320KB. The engine I use is wiredTiger and the total size of collection is about 885MB.
My MongoDB config is as below:
systemLog:
  destination: file
  path: /usr/local/var/log/mongodb/mongo.log
  logAppend: true
storage:
  dbPath: /usr/local/var/mongodb
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      statisticsLogDelaySecs: 0
      journalCompressor: snappy
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: false
net:
  bindIp: 127.0.0.1
My connection is via socket:
mongo_client = MongoClient('/tmp/mongodb-27017.sock')
And collection stats reveal this result:
db.mycol.stats()
{
"ns" : "bi.mycol",
"count" : 2776,
"size" : 885388544,
"avgObjSize" : 318944,
"storageSize" : 972476416,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=0,checkpoint=(WiredTigerCheckpoint.9=(addr=\"01e30275da81e4b9e99f78e30275db81e4c61d1e01e30275dc81e40fab67d5808080e439f6afc0e41e80bfc0\",order=9,time=1444566832,size=511762432,write_gen=13289)),checkpoint_lsn=(24,52054144),checksum=uncompressed,collator=,columns=,dictionary=0,format=btree,huffman_key=,huffman_value=,id=5,internal_item_max=0,internal_key_max=0,internal_key_truncate=,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=1MB,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=0,prefix_compression_min=4,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,value_format=u,version=(major=1,minor=1)",
"type" : "file",
"uri" : "statistics:table:collection-0-6630292038312816605",
"LSM" : {
"bloom filters in the LSM tree" : 0,
"bloom filter false positives" : 0,
"bloom filter hits" : 0,
"bloom filter misses" : 0,
"bloom filter pages evicted from cache" : 0,
"bloom filter pages read into cache" : 0,
"total size of bloom filters" : 0,
"sleep for LSM checkpoint throttle" : 0,
"chunks in the LSM tree" : 0,
"highest merge generation in the LSM tree" : 0,
"queries that could have benefited from a Bloom filter that did not exist" : 0,
"sleep for LSM merge throttle" : 0
},
"block-manager" : {
"file allocation unit size" : 4096,
"blocks allocated" : 0,
"checkpoint size" : 511762432,
"allocations requiring file extension" : 0,
"blocks freed" : 0,
"file magic number" : 120897,
"file major version number" : 1,
"minor version number" : 0,
"file bytes available for reuse" : 460734464,
"file size in bytes" : 972476416
},
"btree" : {
"column-store variable-size deleted values" : 0,
"column-store fixed-size leaf pages" : 0,
"column-store internal pages" : 0,
"column-store variable-size leaf pages" : 0,
"pages rewritten by compaction" : 0,
"number of key/value pairs" : 0,
"fixed-record size" : 0,
"maximum tree depth" : 4,
"maximum internal page key size" : 368,
"maximum internal page size" : 4096,
"maximum leaf page key size" : 3276,
"maximum leaf page size" : 32768,
"maximum leaf page value size" : 1048576,
"overflow pages" : 0,
"row-store internal pages" : 0,
"row-store leaf pages" : 0
},
"cache" : {
"bytes read into cache" : 3351066029,
"bytes written from cache" : 0,
"checkpoint blocked page eviction" : 0,
"unmodified pages evicted" : 8039,
"page split during eviction deepened the tree" : 0,
"modified pages evicted" : 0,
"data source pages selected for eviction unable to be evicted" : 1,
"hazard pointer blocked page eviction" : 1,
"internal pages evicted" : 0,
"pages split during eviction" : 0,
"in-memory page splits" : 0,
"overflow values cached in memory" : 0,
"pages read into cache" : 10519,
"overflow pages read into cache" : 0,
"pages written from cache" : 0
},
"compression" : {
"raw compression call failed, no additional data available" : 0,
"raw compression call failed, additional data available" : 0,
"raw compression call succeeded" : 0,
"compressed pages read" : 10505,
"compressed pages written" : 0,
"page written failed to compress" : 0,
"page written was too small to compress" : 0
},
"cursor" : {
"create calls" : 7,
"insert calls" : 0,
"bulk-loaded cursor-insert calls" : 0,
"cursor-insert key and value bytes inserted" : 0,
"next calls" : 0,
"prev calls" : 2777,
"remove calls" : 0,
"cursor-remove key bytes removed" : 0,
"reset calls" : 16657,
"search calls" : 16656,
"search near calls" : 0,
"update calls" : 0,
"cursor-update value bytes updated" : 0
},
"reconciliation" : {
"dictionary matches" : 0,
"internal page multi-block writes" : 0,
"leaf page multi-block writes" : 0,
"maximum blocks required for a page" : 0,
"internal-page overflow keys" : 0,
"leaf-page overflow keys" : 0,
"overflow values written" : 0,
"pages deleted" : 0,
"page checksum matches" : 0,
"page reconciliation calls" : 0,
"page reconciliation calls for eviction" : 0,
"leaf page key bytes discarded using prefix compression" : 0,
"internal page key bytes discarded using suffix compression" : 0
},
"session" : {
"object compaction" : 0,
"open cursor count" : 7
},
"transaction" : {
"update conflicts" : 0
}
},
"nindexes" : 2,
"totalIndexSize" : 208896,
"indexSizes" : {
"_id_" : 143360,
"date_1" : 65536
},
"ok" : 1
}
How can I tell whether MongoDB is using swap? How can I work out where exactly the bottleneck is?
EDIT:
The way I fetch data in python is:
for doc in mycol.find({'date': {"$lte": '2016-12-12', '$gte': '2012-09-09'}}, {'_id': False}):
    doc['uids'] = set(doc['uids'])
    records.append(doc)
date field is indexed.
EDIT 2:
These are the resource readings while fetching the data:
CPU core1: ~65%
CPU core2: ~65%
CPU core3: ~65%
CPU core4: ~65%
RAM: 7190/8190MB
swap: 1140/2048MB
EDIT 3:
MongoDB log is:
2015-10-11T17:25:08.317+0330 I NETWORK [initandlisten] connection accepted from anonymous unix socket #18 (2 connections now open)
2015-10-11T17:25:08.321+0330 I NETWORK [initandlisten] connection accepted from anonymous unix socket #19 (3 connections now open)
2015-10-11T17:25:36.501+0330 I QUERY [conn19] getmore bi.mycol cursorid:10267473126 ntoreturn:0 keyUpdates:0 writeConflicts:0 numYields:3 nreturned:14 reslen:4464998 locks:{} 199ms
2015-10-11T17:25:37.665+0330 I QUERY [conn19] getmore bi.mycol cursorid:10267473126 ntoreturn:0 keyUpdates:0 writeConflicts:0 numYields:5 nreturned:14 reslen:4464998 locks:{} 281ms
2015-10-11T17:25:50.331+0330 I NETWORK [conn19] end connection anonymous unix socket (2 connections now open)
2015-10-11T17:25:50.363+0330 I NETWORK [conn18] end connection anonymous unix socket (1 connection now open)
EDIT 4:
Sample data is:
{"date": "2012-09-12", "uids": [1,2,3,4,...,30000]}
NB: I have 30k UIDs inside the uids field.
EDIT 5:
Explaining the query shows that it used an IXSCAN stage:
$ db.mycol.find({'date': {"$lte": '2018-11-27', '$gte': '2011-04-23'}}, {'_id': 0}).explain("executionStats")
{
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "bi.mycol",
"indexFilterSet" : false,
"parsedQuery" : {
"$and" : [
{
"date" : {
"$lte" : "2018-11-27"
}
},
{
"date" : {
"$gte" : "2011-04-23"
}
}
]
},
"winningPlan" : {
"stage" : "PROJECTION",
"transformBy" : {
"_id" : 0
},
"inputStage" : {
"stage" : "FETCH",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"date" : 1
},
"indexName" : "date_1",
"isMultiKey" : false,
"direction" : "forward",
"indexBounds" : {
"date" : [
"[\"2011-04-23\", \"2018-11-27\"]"
]
}
}
}
},
"rejectedPlans" : [ ]
},
"executionStats" : {
"executionSuccess" : true,
"nReturned" : 2776,
"executionTimeMillis" : 2312,
"totalKeysExamined" : 2776,
"totalDocsExamined" : 2776,
"executionStages" : {
"stage" : "PROJECTION",
"nReturned" : 2776,
"executionTimeMillisEstimate" : 540,
"works" : 2777,
"advanced" : 2776,
"needTime" : 0,
"needFetch" : 0,
"saveState" : 31,
"restoreState" : 31,
"isEOF" : 1,
"invalidates" : 0,
"transformBy" : {
"_id" : 0
},
"inputStage" : {
"stage" : "FETCH",
"nReturned" : 2776,
"executionTimeMillisEstimate" : 470,
"works" : 2777,
"advanced" : 2776,
"needTime" : 0,
"needFetch" : 0,
"saveState" : 31,
"restoreState" : 31,
"isEOF" : 1,
"invalidates" : 0,
"docsExamined" : 2776,
"alreadyHasObj" : 0,
"inputStage" : {
"stage" : "IXSCAN",
"nReturned" : 2776,
"executionTimeMillisEstimate" : 0,
"works" : 2776,
"advanced" : 2776,
"needTime" : 0,
"needFetch" : 0,
"saveState" : 31,
"restoreState" : 31,
"isEOF" : 1,
"invalidates" : 0,
"keyPattern" : {
"date" : 1
},
"indexName" : "date_1",
"isMultiKey" : false,
"direction" : "forward",
"indexBounds" : {
"date" : [
"[\"2011-04-23\", \"2018-11-27\"]"
]
},
"keysExamined" : 2776,
"dupsTested" : 0,
"dupsDropped" : 0,
"seenInvalidated" : 0,
"matchTested" : 0
}
}
}
},
"serverInfo" : {
"host" : "MySys.local",
"port" : 27017,
"version" : "3.0.0",
"gitVersion" : "nogitversion"
},
"ok" : 1
}
EDIT 6:
OS: Mac osX Yosemite
MongoDB version: 3.0.0
Total RAM: 8G
Filesystem: Mac OS Extended (Journaled)
|
The methods I used to improve performance:
First of all, instead of using a for loop to traverse the query and fetch data, I hand the cursor to Pandas rather than creating a large list object in Python:
cursor = mycol.find({'date': {"$lte": end_date, '$gte': start_date}}, {'_id': False})
df = pandas.DataFrame(list(cursor))
Performance got much better: it now takes 10 seconds at most rather than 30 seconds.
Instead of using doc['uids'] = set(doc['uids']), which took around 6 seconds, I kept the default list and handled duplicates with the DataFrame itself.
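For reference, a one-line way to handle the duplicates inside the DataFrame (assuming each row's uids is still a plain list):
df['uids'] = df['uids'].apply(lambda ids: list(set(ids)))  # dedupe per row without touching the cursor loop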
|