Boto3, python and how to handle errors
I just picked up Python as my go-to scripting language and I am trying to figure out how to do proper error handling with boto3. I am trying to create an IAM user: def create_user(username, iam_conn): try: user = iam_conn.create_user(UserName=username) return user except Exception as e: return e When the call to create_user succeeds, I get a neat object that contains the HTTP status code of the API call and the data of the newly created user. Example: {'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'omitted' }, u'User': {u'Arn': 'arn:aws:iam::omitted:user/omitted', u'CreateDate': datetime.datetime(2015, 10, 11, 17, 13, 5, 882000, tzinfo=tzutc()), u'Path': '/', u'UserId': 'omitted', u'UserName': 'omitted' } } This works great. But when this fails (like if the user already exists), I just get an object of type botocore.exceptions.ClientError with only text to tell me what went wrong. Example: ClientError('An error occurred (EntityAlreadyExists) when calling the CreateUser operation: User with name omitted already exists.',) This (AFAIK) makes error handling very hard because I can't just switch on the resulting HTTP status code (409 for user already exists, according to the AWS API docs for IAM). This makes me think that I must be doing something the wrong way. The optimal way would be for boto3 to never throw exceptions, but just always return an object that reflects how the API call went. Can anyone enlighten me on this issue or point me in the right direction? Thanks a lot!
Use the response contained within the exception. Here is an example: import boto3 import botocore try: iam = boto3.client('iam') user = iam.create_user(UserName='fred') print "Created user: %s" % user except botocore.exceptions.ClientError as e: if e.response['Error']['Code'] == 'EntityAlreadyExists': print "User already exists" else: print "Unexpected error: %s" % e The response dict in the exception will contain the following: ['Error']['Code'] e.g. 'EntityAlreadyExists' or 'ValidationError' ['ResponseMetadata']['HTTPStatusCode'] e.g. 400 ['ResponseMetadata']['RequestId'] e.g. 'd2b06652-88d7-11e5-99d0-812348583a35' ['Error']['Message'] e.g. "An error occurred (EntityAlreadyExists) ..." ['Error']['Type'] e.g. 'Sender'
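For Python 3, the same idea can be wrapped into a small helper; a minimal sketch, assuming boto3 credentials are already configured (the helper name and its (result, error) return convention are illustrative, not part of boto3):

import boto3
import botocore

def create_user(iam_client, username):
    # Returns (user_dict, None) on success, (None, error_code) for the one
    # failure we expect, and re-raises anything else.
    try:
        response = iam_client.create_user(UserName=username)
        return response['User'], None
    except botocore.exceptions.ClientError as e:
        code = e.response['Error']['Code']                          # e.g. 'EntityAlreadyExists'
        status = e.response['ResponseMetadata']['HTTPStatusCode']   # e.g. 409 per the IAM docs
        if code == 'EntityAlreadyExists':
            print('User already exists (HTTP %d)' % status)
            return None, code
        raise

iam = boto3.client('iam')
user, err = create_user(iam, 'fred')
print(user if err is None else 'skipped: %s' % err)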
Fast linear interpolation in Numpy / Scipy "along a path"
Let's say that I have data from weather stations at 3 (known) altitudes on a mountain. Specifically, each station records a temperature measurement at its location every minute. I have two kinds of interpolation I'd like to perform. And I'd like to be able to perform each quickly. So let's set up some data: import numpy as np from scipy.interpolate import interp1d import pandas as pd import seaborn as sns np.random.seed(0) N, sigma = 1000., 5 basetemps = 70 + (np.random.randn(N) * sigma) midtemps = 50 + (np.random.randn(N) * sigma) toptemps = 40 + (np.random.randn(N) * sigma) alltemps = np.array([basetemps, midtemps, toptemps]).T # note transpose! trend = np.sin(4 / N * np.arange(N)) * 30 trend = trend[:, np.newaxis] altitudes = np.array([500, 1500, 4000]).astype(float) finaltemps = pd.DataFrame(alltemps + trend, columns=altitudes) finaltemps.index.names, finaltemps.columns.names = ['Time'], ['Altitude'] finaltemps.plot() Great, so our temperatures look like this: Interpolate all times to for the same altitude: I think this one is pretty straightforward. Say I want to get the temperature at an altitude of 1,000 for each time. I can just use built in scipy interpolation methods: interping_function = interp1d(altitudes, finaltemps.values) interped_to_1000 = interping_function(1000) fig, ax = plt.subplots(1, 1, figsize=(8, 5)) finaltemps.plot(ax=ax, alpha=0.15) ax.plot(interped_to_1000, label='Interped') ax.legend(loc='best', title=finaltemps.columns.name) This works nicely. And let's see about speed: %%timeit res = interp1d(altitudes, finaltemps.values)(1000) #-> 1000 loops, best of 3: 207 µs per loop Interpolate "along a path": So now I have a second, related problem. Say I know the altitude of a hiking party as a function of time, and I want to compute the temperature at their (moving) location by linearly interpolating my data through time. In particular, the times at which I know the location of the hiking party are the same times at which I know the temperatures at my weather stations. I can do this without too much effort: location = np.linspace(altitudes[0], altitudes[-1], N) interped_along_path = np.array([interp1d(altitudes, finaltemps.values[i, :])(loc) for i, loc in enumerate(location)]) fig, ax = plt.subplots(1, 1, figsize=(8, 5)) finaltemps.plot(ax=ax, alpha=0.15) ax.plot(interped_along_path, label='Interped') ax.legend(loc='best', title=finaltemps.columns.name) So this works really nicely, but its important to note that the key line above is using list comprehension to hide an enormous amount of work. In the previous case, scipy is creating a single interpolation function for us, and evaluating it once on a large amount of data. In this case, scipy is actually constructing N individual interpolating functions and evaluating each once on a small amount of data. This feels inherently inefficient. There is a for loop lurking here (in the list comprehension) and moreover, this just feels flabby. Not surprisingly, this is much slower than the previous case: %%timeit res = np.array([interp1d(altitudes, finaltemps.values[i, :])(loc) for i, loc in enumerate(location)]) #-> 10 loops, best of 3: 145 ms per loop So the second example runs 1,000 slower than the first. I.e. consistent with the idea that the heavy lifting is the "make a linear interpolation function" step...which is happening 1,000 times in the second example but only once in the first. So, the question: is there a better way to approach the second problem? 
For example, is there a good way to set it up with 2-dimensional interpolation (which could perhaps handle the case where the times at which the hiking party locations are known are not the times at which the temperatures have been sampled)? Or is there a particularly slick way to handle things here where the times do line up? Or other?
A linear interpolation between two values y1, y2 at locations x1 and x2, with respect to point xi is simply: yi = y1 + (y2-y1) * (xi-x1) / (x2-x1) With some vectorized Numpy expressions we can select the relevant points from the dataset and apply the above function: I = np.searchsorted(altitudes, location) x1 = altitudes[I-1] x2 = altitudes[I] time = np.arange(len(alltemps)) y1 = alltemps[time,I-1] y2 = alltemps[time,I] xI = location yI = y1 + (y2-y1) * (xI-x1) / (x2-x1) The trouble is that some points lie on the boundaries of (or even outside of) the known range, which should be taken into account: I = np.searchsorted(altitudes, location) same = (location == altitudes.take(I, mode='clip')) out_of_range = ~same & ((I == 0) | (I == altitudes.size)) I[out_of_range] = 1 # Prevent index-errors x1 = altitudes[I-1] x2 = altitudes[I] time = np.arange(len(alltemps)) y1 = alltemps[time,I-1] y2 = alltemps[time,I] xI = location yI = y1 + (y2-y1) * (xI-x1) / (x2-x1) yI[out_of_range] = np.nan Luckily, Scipy already provides ND interpolation, which also just as easy takes care of the mismatching times, for example: from scipy.interpolate import interpn time = np.arange(len(alltemps)) M = 150 hiketime = np.linspace(time[0], time[-1], M) location = np.linspace(altitudes[0], altitudes[-1], M) xI = np.column_stack((hiketime, location)) yI = interpn((time, altitudes), alltemps, xI) Here's a benchmark code (without any pandas actually, bit I did include the solution from the other answer): import numpy as np from scipy.interpolate import interp1d, interpn def original(): return np.array([interp1d(altitudes, alltemps[i, :])(loc) for i, loc in enumerate(location)]) def OP_self_answer(): return np.diagonal(interp1d(altitudes, alltemps)(location)) def interp_checked(): I = np.searchsorted(altitudes, location) same = (location == altitudes.take(I, mode='clip')) out_of_range = ~same & ((I == 0) | (I == altitudes.size)) I[out_of_range] = 1 # Prevent index-errors x1 = altitudes[I-1] x2 = altitudes[I] time = np.arange(len(alltemps)) y1 = alltemps[time,I-1] y2 = alltemps[time,I] xI = location yI = y1 + (y2-y1) * (xI-x1) / (x2-x1) yI[out_of_range] = np.nan return yI def scipy_interpn(): time = np.arange(len(alltemps)) xI = np.column_stack((time, location)) yI = interpn((time, altitudes), alltemps, xI) return yI N, sigma = 1000., 5 basetemps = 70 + (np.random.randn(N) * sigma) midtemps = 50 + (np.random.randn(N) * sigma) toptemps = 40 + (np.random.randn(N) * sigma) trend = np.sin(4 / N * np.arange(N)) * 30 trend = trend[:, np.newaxis] alltemps = np.array([basetemps, midtemps, toptemps]).T + trend altitudes = np.array([500, 1500, 4000], dtype=float) location = np.linspace(altitudes[0], altitudes[-1], N) funcs = [original, interp_checked, scipy_interpn] for func in funcs: print(func.func_name) %timeit func() from itertools import combinations outs = [func() for func in funcs] print('Output allclose:') print([np.allclose(out1, out2) for out1, out2 in combinations(outs, 2)]) With the following result on my system: original 10 loops, best of 3: 184 ms per loop OP_self_answer 10 loops, best of 3: 89.3 ms per loop interp_checked 1000 loops, best of 3: 224 µs per loop scipy_interpn 1000 loops, best of 3: 1.36 ms per loop Output allclose: [True, True, True, True, True, True] Scipy's interpn suffers somewhat in terms of speed compared to the very fastest method, but for it's generality and ease of use it's definitely the way to go.
How does this Python 3 quine work?
Found this example of quine: s='s=%r;print(s%%s)';print(s%s) I get that %s and %r do the str and repr functions, as pointed here, but what exactly means the s%s part and how the quine works?
s is set to: 's=%r;print(s%%s)' so the %r gets replaced by exactly that (keeping the single quotes) in s%s and the final %% with a single %, giving: s='s=%r;print(s%%s)';print(s%s) and hence the quine.
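If it helps, you can watch the substitution happen; a small check, using nothing beyond the interpreter, that the expanded string is the source line itself:

s = 's=%r;print(s%%s)'
expanded = s % s      # %r becomes repr(s) (quotes included), %% collapses to a single %
print(expanded)       # s='s=%r;print(s%%s)';print(s%s)
source = "s='s=%r;print(s%%s)';print(s%s)"
print(expanded == source)   # True: the program prints its own source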
Losslessly compressing images on django
I'm doing optimization and Google recommends Lossless compression to images, looking for a way to implement this in Django. Here's the images they specified, I think for it to be done effectively it needs to implemented systemwide possibly using a middleware class wondering if anyone has done this before. Here's the link to google analytics for pagespeed https://developers.google.com/speed/pagespeed/insights/?url=www.kenyabuzz.com Optimize images Properly formatting and compressing images can save many bytes of data. Optimize the following images to reduce their size by 627.3KiB (74% reduction). Losslessly compressing http://www.kenyabuzz.com/media/uploads/clients/kenya_buzz_2.jpg could save 594.3KiB (92% reduction). Losslessly compressing http://www.kenyabuzz.com/media/uploads/clients/new_tribe_2.jpg could save 25KiB (44% reduction). Losslessly compressing http://www.kenyabuzz.com/…a/uploads/clients/EthiopianAirlines2.jpg could save 3KiB (22% reduction). Losslessly compressing http://www.kenyabuzz.com/static/kb/images/Nightlife.Homepage.jpg could save 1.3KiB (2% reduction). Losslessly compressing http://www.kenyabuzz.com/static/kb/img/social/blog.png could save 1.1KiB (43% reduction). Losslessly compressing http://www.kenyabuzz.com/static/kb/img/social/twitter.png could save 969B (52% reduction). Losslessly compressing http://www.kenyabuzz.com/…der-Board---Email-Signature--Neutral.jpg could save 920B (2% reduction). Losslessly compressing http://www.kenyabuzz.com/static/kb/img/social/youtube.png could save 757B (31% reduction).
Losslessly compressing http://www.kenyabuzz.com/media/uploads/clients/kenya_buzz_2.jpg could save 594.3KiB (92% reduction). First of all, the information in the logs is rather misleading because it is impossible to compress images by 92% using a lossless format (except for some cases like single-colour images, basic geometric shapes like squares, etc). Read this answer and this answer for more info. Really, do read them, both are excellent answers. Second, you can use lossy compression formats "without losing quality" – the differences are so subtle, human eye doesn't even notice. So, I downloaded an image from the website you're optimizing from this link: http://www.kenyabuzz.com/media/uploads/clients/kenya_buzz_2.jpg I opened my Python console and wrote this: >>> from PIL import Image >>> # Open the image >>> im = Image.open("kenya_buzz_2.jpg") >>> # Now save it >>> im.save("kenya_buzz_compressed.jpg", format="JPEG", quality=70) This created a new image on my disk. Below are both the images. Original (655.3kB) Compressed (22.4kB ~96% reduction @ quality=70) You can play around with the quality option. Like, value of 80 will give you a better quality image but with a little larger size.
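To do this "systemwide" in Django rather than by hand, one common approach is to re-encode images as they are saved; below is a rough sketch only, with made-up model and field names and an arbitrary quality setting, not a drop-in middleware:

from io import BytesIO

from django.core.files.base import ContentFile
from django.db import models
from PIL import Image

class ClientLogo(models.Model):
    image = models.ImageField(upload_to='clients/')

    def save(self, *args, **kwargs):
        # Re-encode JPEG uploads before they reach storage; quality=70 is just
        # a starting point, inspect the output and tune it for your images.
        if self.image and self.image.name.lower().endswith(('.jpg', '.jpeg')):
            img = Image.open(self.image)
            buf = BytesIO()
            img.save(buf, format='JPEG', quality=70, optimize=True)
            self.image = ContentFile(buf.getvalue(), name=self.image.name)
        super(ClientLogo, self).save(*args, **kwargs)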
Mapping dictionary value to list
Given the following dictionary: dct = {'a':3, 'b':3,'c':5,'d':3} How can I apply these values to a list such as: lst = ['c', 'd', 'a', 'b', 'd'] in order to get something like: lstval = [5, 3, 3, 3, 3]
Using map: >>> map(dct.get, lst) [5, 3, 3, 3, 3] Using a list comprehension: >>> [dct[k] for k in lst] [5, 3, 3, 3, 3]
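Two small caveats worth knowing, shown below: on Python 3 map() returns a lazy iterator (wrap it in list() to get a list), and dct.get yields None for missing keys, whereas dct[k] in the comprehension raises KeyError:

dct = {'a': 3, 'b': 3, 'c': 5, 'd': 3}
lst = ['c', 'd', 'a', 'b', 'd', 'zzz']

print(list(map(dct.get, lst)))        # [5, 3, 3, 3, 3, None]   (Python 3: list() needed)
print([dct.get(k, 0) for k in lst])   # [5, 3, 3, 3, 3, 0]      explicit default
# [dct[k] for k in lst] would raise KeyError: 'zzz'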
HTTPError: HTTP Error 503: Service Unavailable goslate language detection request : Python
I have just started using the goslate library in python to detect the language of the words in a text but after testing it for 7-8 inputs, I gave the input which had the words written in two languages arabic and english. After which, it started giving me the error. Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> execfile("C:/test_goslate.py"); File "C:/test_goslate.py", line 12, in <module> language_id = gs.detect('الدولة') File "C:\Python27\lib\site-packages\goslate.py", line 484, in detect return self._detect_language(text) File "C:\Python27\lib\site-packages\goslate.py", line 448, in _detect_language return self._basic_translate(text[:50].encode('utf-8'), 'en', 'auto')[1] File "C:\Python27\lib\site-packages\goslate.py", line 251, in _basic_translate response_content = self._open_url(url) File "C:\Python27\lib\site-packages\goslate.py", line 181, in _open_url response = self._opener.open(request, timeout=self._TIMEOUT) File "C:\Python27\lib\urllib2.py", line 410, in open response = meth(req, response) File "C:\Python27\lib\urllib2.py", line 523, in http_response 'http', request, response, code, msg, hdrs) File "C:\Python27\lib\urllib2.py", line 448, in error return self._call_chain(*args) File "C:\Python27\lib\urllib2.py", line 382, in _call_chain result = func(*args) File "C:\Python27\lib\urllib2.py", line 531, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 503: Service Unavailable I wrote the code as : # -*- coding: utf8 -*- import urllib2 import goslate gs = goslate.Goslate() language_id = gs.detect('wait الدولة') print (gs.get_languages()[language_id]) and now it is not working at all for any input which I tested previously and is giving me same error. I tried finding error resolve on google but nothing helped. This is what I found : Link 1 - StackOverflow I tried updating it with the command as also suggested in the link above : pip install -U goslate but it did not help as it is already the newest updated version that I am using. Also I read in the library documentation that one gets this kind of error for translation when : If you get HTTP 5xx error, it is probably because google has banned your client IP address from transation querying. You could verify it by access google translation service in browser manually. You could try the following to overcome this issue: query through a HTTP/SOCK5 proxy, see Proxy Support using another google domain for translation: gs = Goslate(service_urls=['http://translate.google.de']) wait for 3 seconds before issue another querying I tried using proxy connection but nothing helped. EDIT Can the reason be that google allows only some number of requests per day ? In that case what better can be done ? Is there any other python based library which can help me resolve this? Please someone help me at this. I am new to it.
Maybe you are looking for this: https://pypi.python.org/pypi/textblob It is better than goslate. Since textblob is blocked as of now, maybe py-translate could do the trick: https://pypi.python.org/pypi/py-translate/#downloads http://pythonhosted.org/py-translate/devs/api.html from translate import translator translator('en', 'es', 'Hello World!') "py-translate is a CLI Tool for Google Translate written in Python!" The first argument to the translator function is the source language, the second is the target language, and the third is the phrase to be translated. It returns a dictionary, which the documentation refers to as a request interface.
How to set class attribute with await in __init__
How can I define a class with await in the constructor or class body? For example what I want: import asyncio # some code class Foo(object): async def __init__(self, settings): self.settings = settings self.pool = await create_pool(dsn) foo = Foo(settings) # it raises: # TypeError: __init__() should return None, not 'coroutine' or example with class body attribute: class Foo(object): self.pool = await create_pool(dsn) # Sure it raises syntax Error def __init__(self, settings): self.settings = settings foo = Foo(settings) My solution (But I would like to see a more elegant way) class Foo(object): def __init__(self, settings): self.settings = settings async def init(self): self.pool = await create_pool(dsn) foo = Foo(settings) await foo.init()
Most magic methods aren't designed to work with async def/await - in general, you should only be using await inside the dedicated asynchronous magic methods - __aiter__, __anext__, __aenter__, and __aexit__. Using it inside other magic methods either won't work at all (as is the case with __init__), or will force you to always use whatever triggers the magic method call in an asynchronous context. Existing asyncio libraries tend to deal with this in one of two ways: First, I've seen the factory pattern used (asyncio-redis, for example): import asyncio dsn = "..." class Foo(object): @classmethod async def create(cls, settings): self = Foo() self.settings = settings self.pool = await create_pool(dsn) return self async def main(settings): settings = "..." foo = await Foo.create(settings) Other libraries use a top-level coroutine function that creates the object, rather than a factory method: import asyncio dsn = "..." async def create_foo(settings): foo = Foo(settings) await foo._init() return foo class Foo(object): def __init__(self, settings): self.settings = settings async def _init(self): self.pool = await create_pool(dsn) async def main(): settings = "..." foo = await create_foo(settings) The create_pool function from aiopg that you want to call in __init__ is actually using this exact pattern. This at least addresses the __init__ issue. I haven't seen class variables that make asynchronous calls in the wild that I can recall, so I don't know that any well-established patterns have emerged.
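For reference, here is a self-contained sketch of the classmethod-factory pattern; the create_pool coroutine below is a stand-in for the real aiopg call, so its body and the dsn value are placeholders:

import asyncio

async def create_pool(dsn):
    # Stand-in for an asynchronous resource constructor such as aiopg's create_pool.
    await asyncio.sleep(0)
    return {'dsn': dsn}

class Foo(object):
    def __init__(self, settings):
        self.settings = settings
        self.pool = None   # filled in by the factory

    @classmethod
    async def create(cls, settings, dsn):
        self = cls(settings)
        self.pool = await create_pool(dsn)
        return self

async def main():
    foo = await Foo.create(settings={'debug': True}, dsn='postgres://example')
    print(foo.pool)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())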
Addition of list and NumPy number
If you add an integer to a list, you get an error raised by the __add__ function of the list (I suppose): >>> [1,2,3] + 3 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can only concatenate list (not "int") to list If you add a list to a NumPy array, I assume that the __add__ function of the NumPy array converts the list to a NumPy array and adds the lists >>> np.array([3]) + [1,2,3] array([4, 5, 6]) But what happens in the following? >>> [1,2,3] + np.array([3]) array([4, 5, 6]) How does the list know how to handle addition with NumPy arrays?
list does not know how to handle addition with NumPy arrays. Even in [1,2,3] + np.array([3]), it's NumPy arrays that handle the addition. As documented in the data model: For objects x and y, first x.__op__(y) is tried. If this is not implemented or returns NotImplemented, y.__rop__(x) is tried. If this is also not implemented or returns NotImplemented, a TypeError exception is raised. But see the following exception: Exception to the previous item: if the left operand is an instance of a built-in type or a new-style class, and the right operand is an instance of a proper subclass of that type or class and overrides the base’s __rop__() method, the right operand’s __rop__() method is tried before the left operand’s __op__() method. When you do [1,2,3] + np.array([3]) what is internally called is np.array([3]).__radd__([1,2,3])
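You can watch the same dispatch with a toy class that only defines __radd__; this mirrors the mechanism described above, not NumPy's actual implementation:

class Reflector(object):
    def __radd__(self, other):
        # list addition cannot handle a Reflector, so Python falls back to this
        print('__radd__ called with', other)
        return [x + 1 for x in other]

print([1, 2, 3] + Reflector())   # prints the message, then [2, 3, 4]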
Getting only element from a single-element list in Python?
When a Python list is known to always contain a single item, is there way to access it other than: mylist[0] You may ask, 'Why would you want to?'. Curiosity alone. There seems to be an alternative way to do everything in Python.
Sequence unpacking: singleitem, = mylist # Identical in behavior (byte code produced is the same), # but arguably more readable since a lone trailing comma could be missed: [singleitem] = mylist Explicit use of iterator protocol: singleitem = next(iter(mylist)) Destructive pop: singleitem = mylist.pop() Negative index: singleitem = mylist[-1] Set via single iteration for (because the loop variable remains available with its last value when a loop terminates): for singleitem in mylist: break Many others (combining or varying bits of the above, or otherwise relying on implicit iteration), but you get the idea.
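One practical difference between these options, shown below: the unpacking forms raise a ValueError as soon as the single-element assumption is violated, while plain indexing silently returns whichever element happens to be first:

mylist = ['only']
(singleitem,) = mylist
print(singleitem)               # only

bad = ['a', 'b']
try:
    (singleitem,) = bad
except ValueError as e:
    print(e)                    # too many values to unpack

print(bad[0])                   # a  -- no error, which may hide a bug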
Assigning to vs. from a slice
When reading profile.py of python standard library I came across the assignment statement sys.argv[:] = args, which is used to modify sys.argv to make the program being profiled see the correct command line arguments. I understand that this is different from sys.argv = args[:] in the actual operations, but in effect they look the same to me. Is there a situation when one wants to use one and not the other? And is a[:] = b a common python idiom? UPDATE: in this specific situation why would one choose one over the other? (source can be found in the main function of profile.py)
The difference is that when you use a[:] = b you overwrite whatever is already in a. If something else holds a reference to a, it will see the change as well, because it keeps referencing the same object. On the other hand, a = b[:] creates a new list and copies all the values from b into it, so existing references to the old data will keep pointing to the old data. Consider this example: >>> a = [1, 2, 3] >>> b = [4, 5, 6] >>> c = a # c is a reference to the list in a >>> c [1, 2, 3] >>> >>> a[:] = b >>> a # a will have a copy of the list in b [4, 5, 6] >>> c # and c will keep having the same value as a [4, 5, 6] >>> >>> b = [7, 8, 9] >>> a = b[:] >>> a # a has the new value [7, 8, 9] >>> c # c keeps having the old value [4, 5, 6]
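This is also why profile.py writes sys.argv[:] = args: sys.argv is a module-level list that other code may already hold a reference to, so it must be mutated in place rather than rebound. A stand-in example (no sys involved) of the same situation:

held_elsewhere = ['old', 'args']      # imagine another module saved this reference
argv = held_elsewhere

argv[:] = ['prog.py', '--fast']       # in-place update: every holder sees it
print(held_elsewhere)                 # ['prog.py', '--fast']

argv = ['rebound', 'list']            # rebinding: held_elsewhere is untouched
print(held_elsewhere)                 # ['prog.py', '--fast']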
On OS X El Capitan I can not upgrade a python package dependent on the six compatibility utilities NOR can I remove six
I am trying to use scrape, but I have a problem. from six.moves import xmlrpc_client as xmlrpclib ImportError: cannot import name xmlrpc_client Then, I tried pip install --upgrade six scrape, but: Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 211, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 311, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 640, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 716, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 125, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 315, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/var/folders/3h/r_2cxlvd1sjgzfgs4xckc__c0000gn/T/pip-5h86J8-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
I just got around what I think was the same problem. You might consider trying this (sudo, if necessary): pip install scrape --upgrade --ignore-installed six Github is ultimately where I got this answer (and there are a few more suggestions you may consider if this one doesn't solve your problem). It also seems as though this is an El Capitan problem. Also, this technically might be a duplicate. But the answer the other post came up with was installing your own Python rather than relying on the default osx Python, which strikes me as more laborious.
Can a website detect when you are using selenium with chromedriver?
I've been testing out Selenium with Chromedriver and I noticed that some pages can detect that you're using Selenium even though there's no automation at all. Even when I'm just browsing manually just using chrome through Selenium and Xephyr I often get a page saying that suspicious activity was detected. I've checked my user agent, and my browser fingerprint, and they are all exactly identical to the normal chrome browser. When I browse to these sites in normal chrome everything works fine, but the moment I use Selenium I'm detected. In theory chromedriver and chrome should look literally exactly the same to any webserver, but somehow they can detect it. If you want some testcode try out this: from pyvirtualdisplay import Display from selenium import webdriver display = Display(visible=1, size=(1600, 902)) display.start() chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--disable-extensions') chrome_options.add_argument('--profile-directory=Default') chrome_options.add_argument("--incognito") chrome_options.add_argument("--disable-plugins-discovery"); chrome_options.add_argument("--start-maximized") driver = webdriver.Chrome(chrome_options=chrome_options) driver.delete_all_cookies() driver.set_window_size(800,800) driver.set_window_position(0,0) print 'arguments done' driver.get('http://stubhub.com') If you browse around stubhub you'll get redirected and 'blocked' within one or two requests. I've been investigating this and I can't figure out how they can tell that a user is using Selenium. How do they do it? EDIT UPDATE: I installed the Selenium IDE plugin in Firefox and I got banned when I went to stubhub.com in the normal firefox browser with only the additional plugin. EDIT: When I use Fiddler to view the HTTP requests being sent back and forth I've noticed that the 'fake browser\'s' requests often have 'no-cache' in the response header. EDIT: results like this Is there a way to detect that I'm in a Selenium Webdriver page from Javascript suggest that there should be no way to detect when you are using a webdriver. But this evidence suggests otherwise. EDIT: The site uploads a fingerprint to their servers, but I checked and the fingerprint of selenium is identical to the fingerprint when using chrome. EDIT: This is one of the fingerprint payloads that they send to their servers {"appName":"Netscape","platform":"Linuxx86_64","cookies":1,"syslang":"en-US","userlang":"en-US","cpu":"","productSub":"20030107","setTimeout":1,"setInterval":1,"plugins":{"0":"ChromePDFViewer","1":"ShockwaveFlash","2":"WidevineContentDecryptionModule","3":"NativeClient","4":"ChromePDFViewer"},"mimeTypes":{"0":"application/pdf","1":"ShockwaveFlashapplication/x-shockwave-flash","2":"FutureSplashPlayerapplication/futuresplash","3":"WidevineContentDecryptionModuleapplication/x-ppapi-widevine-cdm","4":"NativeClientExecutableapplication/x-nacl","5":"PortableNativeClientExecutableapplication/x-pnacl","6":"PortableDocumentFormatapplication/x-google-chrome-pdf"},"screen":{"width":1600,"height":900,"colorDepth":24},"fonts":{"0":"monospace","1":"DejaVuSerif","2":"Georgia","3":"DejaVuSans","4":"TrebuchetMS","5":"Verdana","6":"AndaleMono","7":"DejaVuSansMono","8":"LiberationMono","9":"NimbusMonoL","10":"CourierNew","11":"Courier"}} Its identical in selenium and in chrome EDIT: VPNs work for a single use but get detected after I load the first page. Clearly some javascript is being run to detect Selenium.
As we've already figured out in the question and the posted answers, there is an anti Web-scraping and a Bot detection service called "Distil Networks" in play here. And, according to the company CEO's interview: Even though they can create new bots, we figured out a way to identify Selenium the a tool they’re using, so we’re blocking Selenium no matter how many times they iterate on that bot. We’re doing that now with Python and a lot of different technologies. Once we see a pattern emerge from one type of bot, then we work to reverse engineer the technology they use and identify it as malicious. It'll take time and additional challenges to understand how exactly they are detecting Selenium, but what can we say for sure at the moment: it's not related to the actions you take with selenium - once you navigate to the site, you get immediately detected and banned. I've tried to add artificial random delays between actions, take a pause after the page is loaded - nothing helped it's not about browser fingerprint either - tried it in multiple browsers with clean profiles and not, incognito modes - nothing helped since, according to the hint in the interview, this was "reverse engineering", I suspect this is done with some JS code being executed in the browser revealing that this is a browser automated via selenium webdriver Decided to post it as an answer, since clearly: Can a website detect when you are using selenium with chromedriver? Yes. Also, what I haven't experimented with is older selenium and older browser versions - in theory, there could be something implemented/added to selenium at a certain point that Distil Networks bot detector currently relies on. Then, if this is the case, we might detect (yeah, let's detect the detector) at what point/version a relevant change was made, look into changelog and changesets and, may be, this could give us more information on where to look and what is it they use to detect a webdriver-powered browser. It's just a theory that needs to be tested.
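If you want to test the "in-page JavaScript" theory yourself, one way is to ask the automated browser what it exposes to scripts. The properties probed below (navigator.webdriver, a webdriver attribute on the root element, and chromedriver-style cdc_ keys) are commonly cited candidates only; there is no confirmation that Distil checks these specific things:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')

probes = {
    'navigator.webdriver':
        'return navigator.webdriver',
    'webdriver attribute on <html>':
        'return document.documentElement.getAttribute("webdriver")',
    'document/window keys mentioning cdc or driver':
        'return Object.keys(document).concat(Object.keys(window)).filter('
        'function(k){ var s = k.toLowerCase(); '
        'return s.indexOf("cdc_") !== -1 || s.indexOf("driver") !== -1; })',
}
for name, script in probes.items():
    print(name, '->', driver.execute_script(script))

driver.quit()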
Getting signals working on PulseAudio's DBus interface?
I'm trying to get a D-Bus signal handler to be called whenever the state of a sink changes in PulseAudio (e.g. becomes inactive). Unfortunately, it isn't being called and I frankly am not sure why. import dbus import dbus.mainloop.glib from gi.repository import GObject dbus.mainloop.glib.DBusGMainLoop(set_as_default=True) bus = dbus.SessionBus() def signal_handler(*args, **kwargs): print('sig: ', args, kwargs) def connect(): import os if 'PULSE_DBUS_SERVER' in os.environ: address = os.environ['PULSE_DBUS_SERVER'] else: bus = dbus.SessionBus() server_lookup = bus.get_object("org.PulseAudio1", "/org/pulseaudio/server_lookup1") address = server_lookup.Get("org.PulseAudio.ServerLookup1", "Address", dbus_interface="org.freedesktop.DBus.Properties") return dbus.connection.Connection(address) conn = connect() core = conn.get_object(object_path='/org/pulseaudio/core1') core.connect_to_signal('StateUpdated', signal_handler) core.ListenForSignal('org.PulseAudio.Core1.Device.StateUpdated', dbus.Array(signature='o'), dbus_interface='org.PulseAudio.Core1') loop = GObject.MainLoop() loop.run()
Try this, works for me. import dbus import os from dbus.mainloop.glib import DBusGMainLoop import gobject def pulse_bus_address(): if 'PULSE_DBUS_SERVER' in os.environ: address = os.environ['PULSE_DBUS_SERVER'] else: bus = dbus.SessionBus() server_lookup = bus.get_object("org.PulseAudio1", "/org/pulseaudio/server_lookup1") address = server_lookup.Get("org.PulseAudio.ServerLookup1", "Address", dbus_interface="org.freedesktop.DBus.Properties") print(address) return address def sig_handler(state): print("State changed to %s" % state) if state == 0: print("Pulseaudio running.") elif state == 1: print("Pulseaudio idle.") elif state == 2: print("Pulseaudio suspended") # setup the glib mainloop DBusGMainLoop(set_as_default=True) loop = gobject.MainLoop() pulse_bus = dbus.connection.Connection(pulse_bus_address()) pulse_core = pulse_bus.get_object(object_path='/org/pulseaudio/core1') pulse_core.ListenForSignal('org.PulseAudio.Core1.Device.StateUpdated', dbus.Array(signature='o'), dbus_interface='org.PulseAudio.Core1') pulse_bus.add_signal_receiver(sig_handler, 'StateUpdated') loop.run() Requires pulseaudio's default.pa to have the following: .ifexists module-dbus-protocol.so load-module module-dbus-protocol .endif
max([x for x in something]) vs max(x for x in something): why is there a difference and what is it?
I was working on a project for class where my code wasn't producing the same results as the reference code. I compared my code with the reference code line by line, they appeared almost exactly the same. Everything seemed to be logically equivalent. Eventually I began replacing lines and testing until I found the line that mattered. Turned out it was something like this (EDIT: exact code is lower down): # my version: max_q = max([x for x in self.getQValues(state)]) # reference version which worked: max_q = max(x for x in self.getQValues(state)) Now, this baffled me. I tried some experiments with the Python (2.7) interpreter, running tests using max on list comprehensions with and without the square brackets. Results seemed to be exactly the same. Even by debugging via PyCharm I could find no reason why my version didn't produce the exact same result as the reference version. Up to this point I thought I had a pretty good handle on how list comprehensions worked (and how the max() function worked), but now I'm not so sure, because this is such a weird discrepancy. What's going on here? Why does my code produce different results than the reference code (in 2.7)? How does passing in a comprehension without brackets differ from passing in a comprehension with brackets? EDIT 2: the exact code was this: # works max_q = max(self.getQValue(nextState, action) for action in legal_actions) # doesn't work (i.e., provides different results) max_q = max([self.getQValue(nextState, action) for action in legal_actions]) I don't think this should be marked as duplicate -- yes, the other question regards the difference between comprehension objects and list objects, but not why max() would provide different results when given a 'some list built by X comprehension', rather than 'X comprehension' alone.
Are you leaking a local variable which is affecting later code? # works action = 'something important' max_q = max(self.getQValue(nextState, action) for action in legal_actions) assert action == 'something important' # doesn't work (i.e., provides different results) max_q = max([self.getQValue(nextState, action) for action in legal_actions]) assert action == 'something important' # fails! Generator and dictionary comprehensions create a new scope, but before py3, list comprehensions do not, for backwards compatibility Easy way to test - change your code to: max_q = max([self.getQValue(nextState, action) for action in legal_actions]) max_q = max(self.getQValue(nextState, action) for action in legal_actions) Assuming self.getQValue is pure, then the only lasting side effect of the first line will be to mess with local variables. If this breaks it, then that's the cause of your problem.
Trie tree match performance in word search
I have been debugging a few similar solutions, but I am wondering if we could improve the trie to match partial prefixes (the search method of class Trie currently only checks whether a full word is matched or not) to further improve performance, since that could return from a wrong path earlier. I am not very confident in the idea, so I am seeking advice early. I post one of the similar solutions. Thanks. Given a 2D board and a list of words from the dictionary, find all words in the board. Each word must be constructed from letters of sequentially adjacent cell, where "adjacent" cells are those horizontally or vertically neighboring. The same letter cell may not be used more than once in a word. For example, Given words = ["oath","pea","eat","rain"] and board = [ ['o','a','a','n'], ['e','t','a','e'], ['i','h','k','r'], ['i','f','l','v'] ] Return ["eat","oath"] class TrieNode(): def __init__(self): self.children = collections.defaultdict(TrieNode) self.isWord = False class Trie(): def __init__(self): self.root = TrieNode() def insert(self, word): node = self.root for w in word: node = node.children[w] node.isWord = True def search(self, word): node = self.root for w in word: node = node.children.get(w) if not node: return False return node.isWord class Solution(object): def findWords(self, board, words): res = [] trie = Trie() node = trie.root for w in words: trie.insert(w) for i in xrange(len(board)): for j in xrange(len(board[0])): self.dfs(board, node, i, j, "", res) return res def dfs(self, board, node, i, j, path, res): if node.isWord: res.append(path) node.isWord = False if i < 0 or i >= len(board) or j < 0 or j >= len(board[0]): return tmp = board[i][j] node = node.children.get(tmp) if not node: return board[i][j] = "#" self.dfs(board, node, i+1, j, path+tmp, res) self.dfs(board, node, i-1, j, path+tmp, res) self.dfs(board, node, i, j-1, path+tmp, res) self.dfs(board, node, i, j+1, path+tmp, res) board[i][j] = tmp
I don't see anything wrong with the trie part of your code. But I think the trie's original design already has early returning when detecting any mismatch. Actually, I usually only use a regular dict as a trie instead of defaultdict + TrieNode, to avoid making the problem over-complicated. You just need to set a "#" key if a certain node is a valid word. And, during insertion, just do node[w] = {}. If you do this, your code can be significantly simplified and early returning will be straightforward, as you will not have a "wrong" key in a node at all! For example, a simple trie containing only 'ab' will look like: {'a': {'b': {'#': {}}}}. So when you search for 'cd', as soon as you realize there is no key 'c' in the outermost dict, you can return False. This implementation is similar to yours, but I believe it's easier to understand.
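A minimal sketch of the plain-dict trie described above, with the "#" end-of-word marker (the function names are just for illustration):

def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node['#'] = {}            # end-of-word marker
    return root

def search(trie, word):
    node = trie
    for ch in word:
        if ch not in node:        # early return on the first mismatching prefix
            return False
        node = node[ch]
    return '#' in node

trie = build_trie(['oath', 'pea', 'eat', 'rain'])
print(search(trie, 'eat'), search(trie, 'ear'))   # True False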
Force compiler when running python setup.py install
Is there a way to explicitly force the compiler for building Cython extensions when running python setup.py install? Where setup.py is of the form: import os.path import numpy as np from setuptools import setup, find_packages, Extension from Cython.Distutils import build_ext setup(name='test', packages=find_packages(), cmdclass={'build_ext': build_ext}, ext_modules = [ Extension("test.func", ["test/func.pyx"]) ], include_dirs=[np.get_include()] ) I'm trying to install a package on Windows 8.1 x64 using Anaconda 3.16, Python 3.4, setuptools 18, Numpy 1.9 and Cython 0.24. The deployment script is adapted from the Cython wiki and this Stack Overflow answer. Makefile.bat :: create and activate a virtual environement with conda conda create --yes -n test_env cython setuptools=18 pywin32 libpython numpy=1.9 python=3 call activate test_env :: activate the MS SDK compiler as explained in the Cython wiki cd C:\Program Files\Microsoft SDKs\Windows\v7.1\ set MSSdk=1 set DISTUTILS_USE_SDK=1 @call .\Bin\SetEnv /x64 /release cd C:\test python setup.py install The problem is that in this case setup.py install still used the mingw compiler included with conda instead of the MS Windows SDK 7.1 one. So the DISTUTILS_USE_SDK=1 and MSSdk=1 don't seem to have an impact on the buid. I'm not sure if activating the MS SDK from within a conda virtualenv might be an issue here. Running python setup.py build_ext --compiler=msvc correctly builds the extension with the MS compiler, but subsequently running the setup.py install, recompiles it with mingw again. Same applies to python setup.py build --compiler=msvc. Also tried running %COMSPEC% /E:ON /V:ON /K "%PROGRAMFILES%\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" as discussed in the answer linked above, but for me this produces a new terminal prompt, coloured in yellow, and stops the install process. Is there a way of forcing the compiler for building this package, for instance, by editing the setup.py?
You can provide (default) command line arguments for distutils in a separate file called setup.cfg (placed parallel to your setup.py). See the docs for more information. To set the compiler use something like: [build] compiler=msvc Now calling python setup.py build is equivalent to calling python setup.py build --compiler=msvc. (You can still direct distutils to use another compiler by calling python setup.py build --compiler=someothercompiler) Now you have successfully directed distutils to use an msvc compiler. Unfortunately there is no option to tell it which msvc compiler to use. Basically there are two options: One: Do nothing and distutils will try to locate vcvarsall.bat and use that to set up an environment. vcvarsall.bat (and the compiler it sets the environment up for) are part of Visual Studio, so you have to have that installed for it to work. Two: Install the Windows SDK and tell distutils to use that. Be aware that the name DISTUTILS_USE_SDK is rather misleading (at least in my opinion). It does NOT in fact tell distutils to use the SDK (and its setenv.bat) to set up an environment; rather it means that distutils should assume the environment has already been set up. That is why you have to use some kind of Makefile.bat as you have shown in the OP. Side note: the specific version of Visual Studio or the Windows SDK depends on the targeted Python version.
SyntaxError with passing **kwargs and trailing comma
I wonder why this is a SyntaxError in Python 3.4: some_function( filename = "foobar.c", **kwargs, ) It works when removing the trailing comma after **kwargs.
As pointed out by vaultah (who for some reason didn’t bother to post an answer), this was reported on the issue tracker and has been changed since. The syntax will work fine starting with Python 3.6. To be explicit, yes, I want to allow trailing comma even after *args or **kwds. And that's what the patch does. —Guido van Rossum
Mysterious interaction between Python's slice bounds and "stride"
I understand that given an iterable such as >>> it = [1, 2, 3, 4, 5, 6, 7, 8, 9] I can turn it into a list and slice off the ends at arbitrary points with, for example >>> it[1:-2] [2, 3, 4, 5, 6, 7] or reverse it with >>> it[::-1] [9, 8, 7, 6, 5, 4, 3, 2, 1] or combine the two with >>> it[1:-2][::-1] [7, 6, 5, 4, 3, 2] However, trying to accomplish this in a single operation produces some results that puzzle me: >>> it[1:-2:-1] [] >>> it[-1:2:-1] [9, 8, 7, 6, 5, 4] >>> it[-2:1:-1] [8, 7, 6, 5, 4, 3] Only after much trial and error do I get what I'm looking for: >>> it[-3:0:-1] [7, 6, 5, 4, 3, 2] This makes my head hurt (and can't help readers of my code): >>> it[-3:0:-1] == it[1:-2][::-1] True How can I make sense of this? Should I even be pondering such things? FWIW, my code does a lot of truncating, reversing, and listifying of iterables, and I was looking for something that was faster and clearer (yes, don't laugh) than list(reversed(it[1:-2])).
This is because in a slice like list[start:stop:step], start is inclusive: the resultant list starts at index start. stop is exclusive: the resultant list only contains elements up to stop - 1 (and not the element at stop). So for your case it[1:-2], the 1 is inclusive, meaning the slice result starts at index 1, whereas the -2 is exclusive, so the last element of the slice comes from index -3. Hence, if you want the reverse of that, you have to do it[-3:0:-1]; only then is -3 included in the sliced result, and the sliced result goes down to index 1.
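If it helps to see exactly which indices a slice touches, slice.indices() resolves the negative and implicit bounds against a given length, which makes the asymmetry visible:

it = [1, 2, 3, 4, 5, 6, 7, 8, 9]

for s in (slice(1, -2, -1), slice(-3, 0, -1), slice(None, None, -1)):
    start, stop, step = s.indices(len(it))
    print(s, '->', list(range(start, stop, step)), '->', it[s])

# slice(1, -2, -1)      -> []                    -> []
# slice(-3, 0, -1)      -> [6, 5, 4, 3, 2, 1]    -> [7, 6, 5, 4, 3, 2]
# slice(None, None, -1) -> [8, 7, 6, ..., 1, 0]  -> [9, 8, 7, 6, 5, 4, 3, 2, 1]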
String character identity paradox
I'm completely stuck with this >>> s = chr(8263) >>> x = s[0] >>> x is s[0] False How is this possible? Does this mean that accessing a string character by indexing create a new instance of the same character? Let's experiment: >>> L = [s[0] for _ in range(1000)] >>> len(set(L)) 1 >>> ids = map(id, L) >>> len(set(ids)) 1000 >>> Yikes what a waste of bytes ;) Or does it mean that str.__getitem__ has a hidden feature? Can somebody explain? But this is not the end of my surprise: >>> s = chr(8263) >>> t = s >>> print(t is s, id(t) == id(s)) True True This is clear: t is an alias for s, so they represent the same object and identities coincide. But again, how the following is possible: >>> print(t[0] is s[0]) False s and t are the same object so what? But worse: >>> print(id(t[0]) == id(s[0])) True t[0] and s[0] have not been garbage collected, are considered as the same object by the is operator but have different ids? Can somebody explain?
There are two point to make here. First, Python does indeed create a new character with the __getitem__ call, but only if that character has ordinal value greater than 256. Observe: >>> s = chr(256) >>> s[0] is s True >>> t = chr(257) >>> t[0] is t False This is because internally, the compiled getitem function checks the ordinal value of the single chracter and calls the get_latin1_char if that value is 256 or less. This allows some single-character strings to be shared. Otherwise, a new unicode object is created. The second issue concerns garbage collection and shows that the interpreter can reuse memory addresses very quickly. When you write: >>> s = t # chr(257) >>> t[0] is s[0] False Python creates two new single character strings and then compares their memory addresses. These are different (we have different objects as per the explanation above) so comparing the objects with is returns False. On the other hand, we can have the seemingly paradoxical situation that: >>> id(t[0]) == id(s[0]) True because the interpreter quickly reuses the memory address of t[0] when it creates the new string s[0] at a later moment in time. If you examine the bytecode this line produces (e.g. with dis - see below), you see that the integer address for each side is built in turn (a new string object is created and then id is called on it). The references to the object t[0] drop to zero as soon as id(t[0]) is returned (we're comparing integers now, not the object) and so s[0] can reuse the same memory address when it is created afterwards. You can't rely on this to always be the case however. For completeness, here is the disassembled bytecode for the line id(t[0]) == id(s[0]) which I've annotated. You can see that the lifetime of t[0] ends before s[0] is created (there are no references to it) hence its memory can be reused. 2 0 LOAD_GLOBAL 0 (id) 3 LOAD_GLOBAL 1 (t) 6 LOAD_CONST 1 (0) 9 BINARY_SUBSCR # t[0] is created 10 CALL_FUNCTION 1 # id(t[0]) is computed... # ...lifetime of string t[0] over 13 LOAD_GLOBAL 0 (id) 16 LOAD_GLOBAL 2 (s) 19 LOAD_CONST 1 (0) 22 BINARY_SUBSCR # s[0] is created... # ...free to reuse t[0] memory 23 CALL_FUNCTION 1 # id(s[0]) is computed 26 COMPARE_OP 2 (==) # the two ids are compared 29 RETURN_VALUE
Getting Spark, Python, and MongoDB to work together
I'm having difficulty getting these components to knit together properly. I have Spark installed and working succesfully, I can run jobs locally, standalone, and also via YARN. I have followed the steps advised (to the best of my knowledge) here and here I'm working on Ubuntu and the various component versions I have are Spark spark-1.5.1-bin-hadoop2.6 Hadoop hadoop-2.6.1 Mongo 2.6.10 Mongo-Hadoop connector cloned from https://github.com/mongodb/mongo-hadoop.git Python 2.7.10 I had some difficulty following the various steps such as which jars to add to which path, so what I have added are in /usr/local/share/hadoop-2.6.1/share/hadoop/mapreduce I have added mongo-hadoop-core-1.5.0-SNAPSHOT.jar the following environment variables export HADOOP_HOME="/usr/local/share/hadoop-2.6.1" export PATH=$PATH:$HADOOP_HOME/bin export SPARK_HOME="/usr/local/share/spark-1.5.1-bin-hadoop2.6" export PYTHONPATH="/usr/local/share/mongo-hadoop/spark/src/main/python" export PATH=$PATH:$SPARK_HOME/bin My Python program is basic from pyspark import SparkContext, SparkConf import pymongo_spark pymongo_spark.activate() def main(): conf = SparkConf().setAppName("pyspark test") sc = SparkContext(conf=conf) rdd = sc.mongoRDD( 'mongodb://username:password@localhost:27017/mydb.mycollection') if __name__ == '__main__': main() I am running it using the command $SPARK_HOME/bin/spark-submit --driver-class-path /usr/local/share/mongo-hadoop/spark/build/libs/ --master local[4] ~/sparkPythonExample/SparkPythonExample.py and I am getting the following output as a result Traceback (most recent call last): File "/home/me/sparkPythonExample/SparkPythonExample.py", line 24, in <module> main() File "/home/me/sparkPythonExample/SparkPythonExample.py", line 17, in main rdd = sc.mongoRDD('mongodb://username:password@localhost:27017/mydb.mycollection') File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 161, in mongoRDD return self.mongoPairRDD(connection_string, config).values() File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 143, in mongoPairRDD _ensure_pickles(self) File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 80, in _ensure_pickles orig_tb) py4j.protocol.Py4JError According to here This exception is raised when an exception occurs in the Java client code. For example, if you try to pop an element from an empty stack. The instance of the Java exception thrown is stored in the java_exception member. Looking at the source code for pymongo_spark.py and the line throwing the error, it says "Error while communicating with the JVM. Is the MongoDB Spark jar on Spark's CLASSPATH? : " So in response I have tried to be sure the right jars are being passed, but I might be doing this all wrong, see below $SPARK_HOME/bin/spark-submit --jars /usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-hadoop-spark-1.5.0-SNAPSHOT.jar,/usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-java-driver-3.0.4.jar --driver-class-path /usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-java-driver-3.0.4.jar,/usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-hadoop-spark-1.5.0-SNAPSHOT.jar --master local[4] ~/sparkPythonExample/SparkPythonExample.py I have imported pymongo to the same python program to verify that I can at least access MongoDB using that, and I can. I know there are quite a few moving parts here so if I can provide any more useful information please let me know.
Updates: 2016-07-04 Since the last update MongoDB Spark Connector matured quite a lot. It provides up-to-date binaries and data source based API but it is using SparkConf configuration so it is subjectively less flexible than the Stratio/Spark-MongoDB. 2016-03-30 Since the original answer I found two different ways to connect to MongoDB from Spark: mongodb/mongo-spark Stratio/Spark-MongoDB While the former one seems to be relatively immature the latter one looks like a much better choice than a Mongo-Hadoop connector and provides a Spark SQL API. # Adjust Scala and package version according to your setup # although officially 0.11 supports only Spark 1.5 # I haven't encountered any issues on 1.6.1 bin/pyspark --packages com.stratio.datasource:spark-mongodb_2.11:0.11.0 df = (sqlContext.read .format("com.stratio.datasource.mongodb") .options(host="mongo:27017", database="foo", collection="bar") .load()) df.show() ## +---+----+--------------------+ ## | x| y| _id| ## +---+----+--------------------+ ## |1.0|-1.0|56fbe6f6e4120712c...| ## |0.0| 4.0|56fbe701e4120712c...| ## +---+----+--------------------+ It seems to be much more stable than mongo-hadoop-spark, supports predicate pushdown without static configuration and simply works. The original answer: Indeed, there are quite a few moving parts here. I tried to make it a little bit more manageable by building a simple Docker image which roughly matches described configuration (I've omitted Hadoop libraries for brevity though). You can find complete source on GitHub (DOI 10.5281/zenodo.47882) and build it from scratch: git clone https://github.com/zero323/docker-mongo-spark.git cd docker-mongo-spark docker build -t zero323/mongo-spark . or download an image I've pushed to Docker Hub so you can simply docker pull zero323/mongo-spark): Start images: docker run -d --name mongo mongo:2.6 docker run -i -t --link mongo:mongo zero323/mongo-spark /bin/bash Start PySpark shell passing --jars and --driver-class-path: pyspark --jars ${JARS} --driver-class-path ${SPARK_DRIVER_EXTRA_CLASSPATH} And finally see how it works: import pymongo import pymongo_spark mongo_url = 'mongodb://mongo:27017/' client = pymongo.MongoClient(mongo_url) client.foo.bar.insert_many([ {"x": 1.0, "y": -1.0}, {"x": 0.0, "y": 4.0}]) client.close() pymongo_spark.activate() rdd = (sc.mongoRDD('{0}foo.bar'.format(mongo_url)) .map(lambda doc: (doc.get('x'), doc.get('y')))) rdd.collect() ## [(1.0, -1.0), (0.0, 4.0)] Please note that mongo-hadoop seems to close the connection after the first action. So calling for example rdd.count() after the collect will throw an exception. Based on different problems I've encountered creating this image I tend to believe that passing mongo-hadoop-1.5.0-SNAPSHOT.jar and mongo-hadoop-spark-1.5.0-SNAPSHOT.jar to both --jars and --driver-class-path is the only hard requirement. Notes: This image is loosely based on jaceklaskowski/docker-spark so please be sure to send some good karma to @jacek-laskowski if it helps. If don't require a development version including new API then using --packages is most likely a better option.
python Ubuntu error install Pillow 3.0.0
I recently failed trying to install Pillow 3.0.0 on my Ubuntu 14.04. No matter what I do (download and try to "sudo python setup.py install" or "sudo -H pip install Pillow==3.0.0 --no-cache-dir") everytime I get error: copying PIL/TiffImagePlugin.py -> build/lib.linux-x86_64-2.7/PIL running egg_info writing Pillow.egg-info/PKG-INFO writing top-level names to Pillow.egg-info/top_level.txt writing dependency_links to Pillow.egg-info/dependency_links.txt warning: manifest_maker: standard file '-c' not found reading manifest file 'Pillow.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'Pillow.egg-info/SOURCES.txt' copying PIL/OleFileIO-README.md -> build/lib.linux-x86_64-2.7/PIL running build_ext Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-build-3waMkf/Pillow/setup.py", line 767, in <module> zip_safe=not debug_build(), File "/usr/lib/python2.7/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/usr/local/lib/python2.7/dist-packages/setuptools/command/install.py", line 61, in run return orig.install.run(self) File "/usr/lib/python2.7/distutils/command/install.py", line 601, in run self.run_command('build') File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/usr/lib/python2.7/distutils/command/build.py", line 128, in run self.run_command(cmd_name) File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/usr/lib/python2.7/distutils/command/build_ext.py", line 337, in run self.build_extensions() File "/tmp/pip-build-3waMkf/Pillow/setup.py", line 515, in build_extensions % (f, f)) ValueError: --enable-zlib requested but zlib not found, aborting. ---------------------------------------- Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-3waMkf/Pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-S_sHo7-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-3waMkf/Pillow
Did you install the dependencies for Pillow? You can install them with $ sudo apt-get build-dep python-imaging $ sudo apt-get install libjpeg8 libjpeg62-dev libfreetype6 libfreetype6-dev The "--enable-zlib requested but zlib not found" error in particular usually means the zlib development headers are missing; on Ubuntu they come from zlib1g-dev, which build-dep should pull in.
Django ignores router when running tests?
I have a django application that uses 2 database connections: To connect to the actual data the app is to produce To a reference master data system, that is maintained completely outside my control The issue that I'm having, is that my webapp can absolutely NOT touch the data in the 2nd database. I solved most of the issues by using 2 (sub)apps, one for every database connection. I created a router file that router any migration, and writing to the first app I also made all the models in the 2nd app non managed, using the model.meta.managed = False option. To be sure, the user I connect to the 2nd database has read only access This works fine for migrations and running. However, when I try to run tests using django testcase, Django tries to delete and create a test_ database on the 2nd database connection. How can I make sure that Django will NEVER update/delete/insert/drop/truncate over the 2nd connection How can I run tests, that do not try to create the second database, but do create the first. Thanks! edited: code model (for the 2nd app, that should not be managed): from django.db import models class MdmMeta(object): db_tablespace = 'MDM_ADM' managed = False ordering = ['name'] class ActiveManager(models.Manager): def get_queryset(self): return super(ActiveManager, self).get_queryset().filter(lifecyclestatus='active') class MdmType(models.Model): entity_guid = models.PositiveIntegerField(db_column='ENTITYGUID') entity_name = models.CharField(max_length=255, db_column='ENTITYNAME') entry_guid = models.PositiveIntegerField(primary_key=True, db_column='ENTRYGUID') name = models.CharField(max_length=255, db_column='NAME') description = models.CharField(max_length=512, db_column='DESCRIPTION') lifecyclestatus = models.CharField(max_length=255, db_column='LIFECYCLESTATUS') # active_manager = ActiveManager() def save(self, *args, **kwargs): raise Exception('Do not save MDM models!') def delete(self, *args, **kwargs): raise Exception('Do not delete MDM models!') def __str__(self): return self.name class Meta(MdmMeta): abstract = True # Create your models here. class MdmSpecies(MdmType): class Meta(MdmMeta): db_table = 'MDM_SPECIES' verbose_name = 'Species' verbose_name_plural = 'Species' class MdmVariety(MdmType): class Meta(MdmMeta): db_table = 'MDM_VARIETY' verbose_name = 'Variety' verbose_name_plural = 'Varieties' ... 
router: __author__ = 'CoesseWa' class MdmRouter(object): def db_for_read(self, model, **hints): if model._meta.app_label == 'mdm': # return 'default' return 'mdm_db' # trying to use one database connection return 'default' def db_for_write(self, model, **hints): return 'default' def allow_relation(self, obj1, obj2, **hints): return None def allow_migrate(self, db, model): if model._meta.app_label == 'mdm': return False settings: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.oracle', 'NAME': '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=1521)))(CONNECT_DATA=(SID=%s)))' % (get_env_variable('LIMS_MIGRATION_HOST'), get_env_variable('LIMS_MIGRATION_SID')), 'USER': 'LIMS_MIGRATION', 'PASSWORD': get_env_variable('LIMS_MIGRATION_PASSWORD'), }, 'mdm_db': { 'ENGINE': 'django.db.backends.oracle', 'NAME': '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=GB3P)(PORT=1521)))' '(CONNECT_DATA=(SID=GB3P)))', 'USER': 'MDM', 'PASSWORD': get_env_variable('MDM_DB_PASSWORD'), }, } one testcase: from django.test.testcases import TestCase __author__ = 'CoesseWa' class ModelTest(TestCase): def test_getting_guid_for_mdm_field(self): self.assertIsNotNone(1) output from when running this tests: ... Destroying old test user... (before this point, django creates the test database for my first connection = OK) Creating test user... => This next lines should never happen. Fails because I use a read only user (luckily) Creating test database for alias 'mdm_db'... Failed (ORA-01031: insufficient privileges Got an error creating the test database: ORA-01031: insufficient privileges
I solved this by changing the DATABASES.TEST definition. I added the TEST['MIRROR'] = 'default' to the mdm_db database entry. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.oracle', 'NAME': '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=1521)))(CONNECT_DATA=(SID=%s)))' % (get_env_variable('LIMS_MIGRATION_HOST'), get_env_variable('LIMS_MIGRATION_SID')), 'USER': 'LIMS_MIGRATION', 'PASSWORD': get_env_variable('LIMS_MIGRATION_PASSWORD'), }, 'mdm_db': { 'ENGINE': 'django.db.backends.oracle', 'NAME': '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=GB3P)(PORT=1521)))' '(CONNECT_DATA=(SID=GB3P)))', 'USER': 'MDM', 'PASSWORD': get_env_variable('MDM_DB_PASSWORD'), 'TEST': { 'MIRROR': 'default', # Added this setting } }, } According to the documentation this option can be abused to skip database creation: However, the replica database has been configured as a test mirror (using the MIRROR test setting), indicating that under testing, replica should be treated as a mirror of default. When the test environment is configured, a test version of replica will not be created. Instead the connection to replica will be redirected to point at default. Running my tests now skips creation of the second database. Thanks for all the input!!
How to check if Celery/Supervisor is running using Python
How to write a script in Python that outputs if celery is running on a machine (Ubuntu)? My use-case. I have a simple python file with some tasks. I'm not using Django or Flask. I use supervisor to run the task queue. For example, tasks.py from celery import Celery, task app = Celery('tasks') @app.task() def add_together(a, b): return a + b Supervisor: [program:celery_worker] directory = /var/app/ command=celery -A tasks worker info This all works, I now want to have page which checks if celery/supervisor process is running. i.e. something like this maybe using Flask allowing me to host the page giving a 200 status allowing me to load balance. For example... check_status.py from flask import Flask app = Flask(__name__) @app.route('/') def status_check(): #check supervisor is running if supervisor: return render_template('up.html') else: return render_template('down.html') if __name__ == '__main__': app.run()
You can run the celery status command via code by importing the celery.bin.celery package: import celery import celery.bin.base import celery.bin.celery import celery.platforms app = celery.Celery('tasks', broker='redis://') status = celery.bin.celery.CeleryCommand.commands['status']() status.app = status.get_app() def celery_is_up(): try: status.run() return True except celery.bin.base.Error as e: if e.status == celery.platforms.EX_UNAVAILABLE: return False raise e if __name__ == '__main__': if celery_is_up(): print('Celery up!') else: print('Celery not responding...')
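An alternative worth a look (a sketch, not part of the answer above): recent Celery versions expose app.control.inspect(), whose ping() returns one reply per running worker, or None when nothing answers. The broker URL below is an assumption; use whatever your supervisor-managed worker uses.

import celery

app = celery.Celery('tasks', broker='redis://')

def celery_is_up():
    # ping() returns e.g. {'celery@host': {'ok': 'pong'}} or None if no worker replies
    replies = app.control.inspect().ping()
    return bool(replies)

if __name__ == '__main__':
    print('Celery up!' if celery_is_up() else 'Celery not responding...')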
Create and import helper functions in tests without creating packages in test directory using py.test
Question How can I import helper functions in test files without creating packages in the test directory? Context I'd like to create a test helper function that I can import in several tests. Say, something like this: # In common_file.py def assert_a_general_property_between(x, y): # test a specific relationship between x and y assert ... # In test/my_test.py def test_something_with(x): some_value = some_function_of_(x) assert_a_general_property_between(x, some_value) Using Python 3.5, with py.test 2.8.2 Current "solution" I'm currently doing this via importing a module inside my project's test directory (which is now a package), but I'd like to do it with some other mechanism if possible (so that my test directory doesn't have packages but just tests, and the tests can be run on an installed version of the package, as is recommended here in the py.test documentation on good practices).
My option is to create an extra dir in the tests dir and add it to the Python path in the conftest, like so: tests/ helpers/ utils.py ... conftest.py setup.cfg in the conftest.py import sys import os sys.path.append(os.path.join(os.path.dirname(__file__), 'helpers')) in setup.cfg [pytest] norecursedirs=tests/helpers this module will then be available with `import utils`, only be careful about name clashes.
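Another option, sketched below on the assumption that the helper is small: expose it through a fixture in conftest.py, which pytest injects into tests without touching sys.path at all (file names here are hypothetical).

# conftest.py
import pytest

def assert_a_general_property_between(x, y):
    assert x <= y   # placeholder property, just for illustration

@pytest.fixture
def property_helper():
    # tests receive the helper simply by naming this fixture as an argument
    return assert_a_general_property_between

# test/my_test.py
def test_something_with(property_helper):
    property_helper(1, 2)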
Read cell content in an ipython notebook
I have an ipython notebook with mixed markdown and python cells. And I'd like some of my python cells to read the adjacent markdown cells and process them as input. An example of the desired situation: CELL 1 (markdown): SQL Code to execute CELL 2 (markdown): select * from tbl where x=1 CELL 3 (python) : mysql.query(ipython.previous_cell.content) (The syntax ipython.previous_cell.content is made up) Executing "CELL 3" should be equivalent to mysql.query("select * from tbl where x=1") How can this be done ?
I think you are trying to attack the problem the wrong way. First yes, it is possible to get the adjacent markdown cell in really hackish way that would not work in headless notebook execution. What you want to do is use IPython cell magics, that allow arbitrary syntax as long as the cell starts with 2 percent signs followed by an identifier. Typically you want SQL cells. You can refer to the documentation about cells magics or I can show you how to build that : from IPython.core.magic import ( Magics, magics_class, cell_magic, line_magic ) @magics_class class StoreSQL(Magics): def __init__(self, shell=None, **kwargs): super().__init__(shell=shell, **kwargs) self._store = [] # inject our store in user availlable namespace under __mystore # name shell.user_ns['__mystore'] = self._store @cell_magic def sql(self, line, cell): """store the cell in the store""" self._store.append(cell) @line_magic def showsql(self, line): """show all recorded statements""" print(self._store) ## use ipython load_ext mechanisme here if distributed get_ipython().register_magics(StoreSQL) Now you can use SQL syntax in your python cells: %%sql select * from foo Where QUX Bar a second cell: %%sql Insert Cheezburger into Can_I_HAZ check what we executed (the 3 dashes show the input /output delimitation, you do not have to type them): %showsql --- ['select * from foo Where QUX Bar', 'Insert Cheezburger into Can_I_HAZ'] And what you asked at the beginning in your question: mysql.query(__mystore[-1]) This of course does require that you execute the previous cells in the right order, nothing prevent you from using the %%sql syntax to name your cells, e.g if _store is a dict, or better a class where you overwrite __getattr__, to act like __getitem__ to access fields with dot syntax . This is left as an exercise to the reader, or end see of the response: @cell_magic def sql(self, line, cell): """store the cell in the store""" self._store[line.strip()] = cell you can then use sql cell like %%sql A1 set foo TO Bar where ID=9 And then in your Python cells mysql.execute(__mystore.A1) I would also strongly suggest looking at Catherine Develin SqlMagic for IPython, and this Notebook gist on GitHub that show this all thing live. In the comment you seem to say you want to add pig, nothing prevent you from having a %%pig magic neither. It is also possible to inject Javascript to enable correct Syntax Highlighting of SQL and PIG, but that's beyond the scope of this question.
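For the part the answer leaves "as an exercise to the reader", a dot-access store could be sketched roughly like this (purely illustrative, not part of the original answer):

class SqlStore(dict):
    """dict whose keys are also readable as attributes: store.A1 == store['A1']"""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

_store = SqlStore()
_store['A1'] = 'set foo TO Bar where ID=9'
print(_store.A1)   # -> 'set foo TO Bar where ID=9'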
What does the built-in function sum do with sum(list, [])?
When I want to unfold a list, I found a way like below: >>> a = [[1, 2], [3, 4], [5, 6]] >>> a [[1, 2], [3, 4], [5, 6]] >>> sum(a, []) [1, 2, 3, 4, 5, 6] I don't know what happened in these lines, and the documentation states: sum(iterable[, start]) Sums start and the items of an iterable from left to right and returns the total. start defaults to 0. The iterable's items are normally numbers, and the start value is not allowed to be a string. For some use cases, there are good alternatives to sum(). The preferred, fast way to concatenate a sequence of strings is by calling ''.join(sequence). To add floating point values with extended precision, see math.fsum(). To concatenate a series of iterables, consider using itertools.chain(). New in version 2.3. Don't you think that start should be a number? Why [] can be written here? (sum(a, []))
Don't you think that start should be a number? start is a number, by default; 0, per the documentation you've quoted. Hence when you do e.g.: sum((1, 2)) it is evaluated as 0 + 1 + 2 and it equals 3 and everyone's happy. If you want to start from a different number, you can supply that instead: >>> sum((1, 2), 3) 6 So far, so good. However, there are other things you can use + on, like lists: >>> ['foo'] + ['bar'] ['foo', 'bar'] If you try to use sum for this, though, expecting the same result, you get a TypeError: >>> sum((['foo'], ['bar'])) Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> sum((['foo'], ['bar'])) TypeError: unsupported operand type(s) for +: 'int' and 'list' because it's now doing 0 + ['foo'] + ['bar']. To fix this, you can supply your own start as [], so it becomes [] + ['foo'] + ['bar'] and all is good again. So to answer: Why [] can be written here? because although start defaults to a number, it doesn't have to be one; other things can be added too, and that comes in handy for things exactly like what you're currently doing.
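To tie this back to the docs quote in the question, here is a small comparison with itertools.chain.from_iterable(), which flattens in a single linear pass while sum(a, []) rebuilds the result list on every addition (quadratic in the number of sublists):

import itertools

a = [[1, 2], [3, 4], [5, 6]]
flat_sum = sum(a, [])                                  # repeated list copies
flat_chain = list(itertools.chain.from_iterable(a))    # single linear pass
assert flat_sum == flat_chain == [1, 2, 3, 4, 5, 6]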
The similar method from the nltk module produces different results on different machines. Why?
I have taught a few introductory classes to text mining with Python, and the class tried the similar method with the provided practice texts. Some students got different results for text1.similar() than others. All versions and etc. were the same. Does anyone know why these differences would occur? Thanks. Code used at command line. python >>> import nltk >>> nltk.download() #here you use the pop-up window to download texts >>> from nltk.book import * *** Introductory Examples for the NLTK Book *** Loading text1, ..., text9 and sent1, ..., sent9 Type the name of the text or sentence to view it. Type: 'texts()' or 'sents()' to list the materials. text1: Moby Dick by Herman Melville 1851 text2: Sense and Sensibility by Jane Austen 1811 text3: The Book of Genesis text4: Inaugural Address Corpus text5: Chat Corpus text6: Monty Python and the Holy Grail text7: Wall Street Journal text8: Personals Corpus text9: The Man Who Was Thursday by G . K . Chesterton 1908 >>>>>> text1.similar("monstrous") mean part maddens doleful gamesome subtly uncommon careful untoward exasperate loving passing mouldy christian few true mystifying imperial modifies contemptible >>> text2.similar("monstrous") very heartily so exceedingly remarkably as vast a great amazingly extremely good sweet Those lists of terms returned by the similar method differ from user to user, they have many words in common, but they are not identical lists. All users were using the same OS, and the same versions of python and nltk. I hope that makes the question clearer. Thanks.
In your example there are 40 other words which have exactly one context in common with the word 'monstrous'. In the similar function a Counter object is used to count the words with similar contexts and then the most common ones (default 20) are printed. Since all 40 have the same frequency the order can differ. From the doc of Counter.most_common: Elements with equal counts are ordered arbitrarily I checked the frequency of the similar words with this code (which is essentially a copy of the relevant part of the function code): from nltk.book import * from nltk.util import tokenwrap from nltk.compat import Counter word = 'monstrous' num = 20 text1.similar(word) wci = text1._word_context_index._word_to_contexts if word in wci.conditions(): contexts = set(wci[word]) fd = Counter(w for w in wci.conditions() for c in wci[w] if c in contexts and not w == word) words = [w for w, _ in fd.most_common(num)] # print(tokenwrap(words)) print(fd) print(len(fd)) print(fd.most_common(num)) Output: (different runs give different output for me) Counter({'doleful': 1, 'curious': 1, 'delightfully': 1, 'careful': 1, 'uncommon': 1, 'mean': 1, 'perilous': 1, 'fearless': 1, 'imperial': 1, 'christian': 1, 'trustworthy': 1, 'untoward': 1, 'maddens': 1, 'true': 1, 'contemptible': 1, 'subtly': 1, 'wise': 1, 'lamentable': 1, 'tyrannical': 1, 'puzzled': 1, 'vexatious': 1, 'part': 1, 'gamesome': 1, 'determined': 1, 'reliable': 1, 'lazy': 1, 'passing': 1, 'modifies': 1, 'few': 1, 'horrible': 1, 'candid': 1, 'exasperate': 1, 'pitiable': 1, 'abundant': 1, 'mystifying': 1, 'mouldy': 1, 'loving': 1, 'domineering': 1, 'impalpable': 1, 'singular': 1})
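A tiny illustration of the point about ties (the words here are arbitrary): sorting tied items with an explicit secondary key makes the output reproducible across machines.

from collections import Counter

fd = Counter({'doleful': 1, 'curious': 1, 'careful': 1, 'mean': 1})
# most_common() may order these four however it likes, since the counts tie;
# an explicit secondary sort key makes the result deterministic.
deterministic = [w for w, _ in sorted(fd.items(), key=lambda kv: (-kv[1], kv[0]))]
print(deterministic)   # ['careful', 'curious', 'doleful', 'mean']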
Distribution of Number of Digits of Random Numbers
I encounter this curious phenomenon trying to implement a UUID generator in JavaScript. Basically, in JavaScript, if I generate a large list of random numbers with the built-in Math.random() on Node 4.2.2: var records = {}; var l; for (var i=0; i < 1e6; i += 1) { l = String(Math.random()).length; if (records[l]) { records[l] += 1; } else { records[l] = 1; } } console.log(records); The numbers of digits have a strange pattern: { '12': 1, '13': 11, '14': 65, '15': 663, '16': 6619, '17': 66378, '18': 611441, '19': 281175, '20': 30379, '21': 2939, '22': 282, '23': 44, '24': 3 } I thought this is a quirk of the random number generator of V8, but similar pattern appears in Python 3.4.3: 12 : 2 13 : 5 14 : 64 15 : 672 16 : 6736 17 : 66861 18 : 610907 19 : 280945 20 : 30455 21 : 3129 22 : 224 And the Python code is as follows: import random random.seed() records = {} for i in range(0, 1000000): n = random.random() l = len(str(n)) try: records[l] += 1 except KeyError: records[l] = 1; for i in sorted(records): print(i, ':', records[i]) The pattern from 18 to below is expected: say if random number should have 20 digits, then if the last digit of a number is 0, it effectively has only 19 digits. If the random number generator is good, the probability of that happening is roughly 1/10. But why the pattern is reversed for 19 and beyond? I guess this is related to float numbers' binary representation, but I can't figure out exactly why.
The reason is indeed related to floating point representation. A floating point number representation has a maximum number of (binary) digits it can represent, and a limited exponent value range. Now when you print this out without using scientific notation, you might in some cases need to have some zeroes after the decimal point before the significant digits start to follow. You can visualize this effect by printing those random numbers which have the longest length when converted to string: var records = {}; var l, r; for (var i=0; i < 1e6; i += 1) { r = Math.random(); l = String(r).length; if (l === 23) { console.log(r); } if (records[l]) { records[l] += 1; } else { records[l] = 1; } } This prints only the 23-long strings, and you will get numbers like these: 0.000007411070483631654 0.000053944830052166104 0.000018188989763578967 0.000029525788901141325 0.000009613635131744402 0.000005937417234758158 0.000021099748521158368 Notice the zeroes before the first non-zero digit. These are actually not stored in the number part of a floating point representation, but implied by its exponent part. If you were to take out the leading zeroes, and then make a count: var records = {}; var l, r, s; for (var i=0; i < 1e6; i += 1) { r = Math.random(); s = String(r).replace(/^[0\.]+/, ''); l = s.length; if (records[l]) { records[l] += 1; } else { records[l] = 1; } } ... you'll get results which are less strange. However, you will see some irregularity that is due to how javascript converts tiny numbers to string: when they get too small, the scientific notation is used in the string representation. You can see this with the following script (not sure if every browser has the same breaking point, so maybe you need to play a bit with the number): var i = 0.00000123456789012345678; console.log(String(i), String(i/10)); This gives me the following output: 0.0000012345678901234567 1.2345678901234568e-7 So very small numbers will get a more fixed string length as a result, quite often 22 characters, while in the non-scientific notation a length of 23 is common. This influences also the second script I provided and length 22 will get more hits than 23. It should be noted that javascript does not switch to scientific notation when converting to string in binary representation: var i = 0.1234567890123456789e-120; console.log(i.toString(2)); The above will print a string of over 450 binary digits!
Random number in the range 1 to sys.maxsize is always 1 mod 2^10
I am trying to find the statistical properties of the PRNGs available in Python (2.7.10) by using the frequency test, runs test and the chi squared test. For carrying out the frequency test, I need to convert the generated random number to its binary representation and then count the distribution of 1's and 0's. I was experimenting with the binary representation of the random numbers on the python console and observed this weird behavior: >>> for n in random.sample(xrange(1, sys.maxsize), 50): ... print '{0:b}'.format(n) ... 101101110011011001110011110110101101101101111111101000000000001 110000101001001011101001110111111110011000101011100010000000001 110111101101110011100010001010000101011111110010001110000000001 100001111010011000101001000001000011001111100000001010000000001 1111000010010011111100111110110100100011110111010000000000001 111000001011101011101110100001001001000011011001110110000000001 1000100111011000111000101010000101010100110111000100000000001 11101001000001101111110101111011001000100011011011010000000001 110011010111101101011000110011011001110001111000001010000000001 110110110110111100011111110111011111101000011001100000000001 100010010000011101011100110101011110111100001100100000000000001 10111100011010011010001000101011001110010010000010010000000001 101011100110110001010110000101100000111111011101011000000000001 1111110010110010000111111000010001101011011010101110000000001 11100010101101110110101000101101011011111101101000010000000001 10011110110110010110011010000110010010111001111001010000000001 110110011100111010100111100100000100011101100001100000000000001 100110011001101011110011010101111101100010000111001010000000001 111000101101100111110010110110100110111001000101000000000000001 111111101000010111001011111100111100011101001011010000000001 11110001111100000111010010011111010101101110111001010000000001 100001100101101100010101111100111101111001101010101010000000001 11101010110011000001101110000000001111010001110111000000000001 100111000110111010001110110101001011100101111101010000000001 100001101100000011101101010101111111011010111110111110000000001 100010010011110110111111111000010001101100111001001100000000001 110011111110010011000110101010101001001010000100011010000000001 1111011010100001001101101000011100001011001110010100000000001 110110011101100101001100111010101111001011111101100000000000001 1010001110100101001001011111000111011100001100000110000000001 1000101110010011011000001011010110001000110100100100000000001 11111110011001011100111110110111000001000100100010000000000001 101111101010000101010111111111000001100101111001011110000000001 10010010111111111100000001010010101100111001100000000000001 111110000001110010001110111101110101010110001110000000000000001 100000101101000110101010010000101101000011111010001110000000001 101001011101100011001000011010010000000111110111100010000000001 10110101010000111010110111001111011000001111001100110000000001 10110111100100100011100101001100000000101110100100010000000001 10010111110001011101001110000111011010110100110111110000000001 111011110010110111011011101011001100001000111001010100000000001 101001010001010100010010010001100111101110101111000110000000001 101011111010000101010101000110001101001001011110000000000001 1010001010111101101010111110110110000001111101101110000000001 10111111111010001000110000101101010101011010101100000000001 101011101010110000001111010100100110000011111100100100000000001 111100001101111010100111010001010010000010110110010110000000001 
100111111000100110100001110101000010111111010010010000000000001 100111100001011100011000000000101100111111000111100110000000001 110110100000110111011101110101101000101110111111010110000000001 >>> As you can see, all numbers end in 0000000001, i.e all numbers are 1 mod 2^10. Why is this so ? Also, this behavior is observed when the range is 1 to sys.maxsize. If the range is specified to be 1 to 2^40, this is not observed. I want to know the reason for this behavior and whether there is anything wrong in my code. The documentation for the random library that implements the PRNGs that I am using is here. Let me know if I should provide any more information.
@roeland hinted at the cause: in Python 2, sample() uses int(random.random() * n) repeatedly. Look at the source code (in your Python's Lib/random.py) for full details. In short, random.random() returns no more than 53 significant (non-zero) leading bits; then int() fills the rest of the low-order bits with zeroes (you're obviously on a machine where sys.maxsize == 2**63 - 1); then indexing your base (xrange(1, sys.maxsize)) by an even integer with "a lot" of of low-order 0 bits always returns an odd integer with the same number of low-order 0 bits (except for the last). In Python 3 none of that happens - random in Python 3 uses stronger algorithms, and only falls back to random.random() when necessary. For example, here under Python 3.4.3: >>> hex(random.randrange(10**70)) '0x91fc11ed768be3a454bd66f593c218d8bbfa3b99f6285291e1d9f964a9' >>> hex(random.randrange(10**70)) '0x7b07ff02b6676801e33094fca2fcca7f6e235481c479c521643b1acaf4' EDIT Here's a more directly relevant example, under 3.4.3 on a 64-bit box: >>> import random, sys >>> sys.maxsize == 2**63 - 1 True >>> for i in random.sample(range(1, sys.maxsize), 6): ... print(bin(i)) 0b10001100101001001111110110011111000100110100111001100000010110 0b100111100110110100111101001100001100110001110010000101101000101 0b1100000001110000110100111101101010110001100110101111011100111 0b111110100001111100101001001001101101100100011001001010100001110 0b1100110100000011100010000011010010100100110111001111100110100 0b10011010000110101010101110001000101110111100100001111101110111 Python 3 doesn't invoke random.random() at all in this case, but instead iteratively grabs chunks of 32 bits from the underlying Mersenne Twister (32-bit unsigned ints are "the natural" outputs from this implementation of MT) , pasting them together to build a suitable index. So, in Python 3, platform floats have nothing to do with it; in Python 2, quirks of float behavior have everything to do with it.
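The mechanism described above can be reproduced directly; this sketch imitates what Python 2's sample() does internally and runs on Python 3 as well:

import random
import sys

n = sys.maxsize - 1             # size of xrange(1, sys.maxsize)
idx = int(random.random() * n)  # at most 53 significant bits survive this step
value = 1 + idx                 # equivalent to xrange(1, sys.maxsize)[idx]
print(bin(value))               # on 64-bit builds the low ~10 bits are 0, plus the trailing 1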
In TensorFlow, what is the difference between Session.run() and Tensor.eval()?
TensorFlow has two ways to evaluate part of a graph: Session.run() on a list of variables and Tensor.eval(). Is there a difference between these two?
If you have a Tensor t, calling t.eval() is equivalent to calling tf.get_default_session().run(t). You can make a session the default as follows: t = tf.constant(42.0) sess = tf.Session() with sess.as_default(): # or `with sess:` to close on exit assert sess is tf.get_default_session() assert t.eval() == sess.run(t) The most important different is that you can use sess.run() to fetch the values of many tensors in the same step: t = tf.constant(42.0) u = tf.constant(37.0) tu = tf.mul(t, u) ut = tf.mul(u, t) with sess.as_default(): tu.eval() # runs one step ut.eval() # runs one step sess.run([tu, ut]) # runs a single step Note that each call to eval and run will execute the whole graph from scratch. To cache the result of a computation, assign it to a tf.Variable.
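As a small follow-up to the last sentence, caching into a tf.Variable could look roughly like this (written against the 0.x-era API used in the answer; treat it as a sketch):

import tensorflow as tf

t = tf.constant(42.0)
u = tf.constant(37.0)
cached = tf.Variable(0.0)
store = tf.assign(cached, tf.mul(t, u))     # compute once, keep the result

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(store)           # runs the multiplication a single time
    print(sess.run(cached))   # later reads just return the stored value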
Speeding-up "for-loop" in image analysis when iterations are up to 40,000
The details of the prerequisites of this code are quite long so I'll try my best to summarize. WB/RG/BYColor is the base image, FIDO is an overlay of this base image which is applied to it. S_wb/rg/by are the final output images. WB/RG/BYColor are the same size as FIDO. For each unique element in FIDO, we want to calculate the average color of that region within the base images. The below code does this, but as numFIDOs is very large (up to 40,000), this takes a long time. The averages are computed for the three separate RGB channels. sX=200 sY=200 S_wb = np.zeros((sX, sY)) S_rg = np.zeros((sX, sY)) S_by = np.zeros((sX, sY)) uniqueFIDOs, unique_counts = np.unique(FIDO, return_counts=True) numFIDOs = uniqueFIDOs.shape for i in np.arange(0,numFIDOs[0]): Lookup = FIDO==uniqueFIDOs[i] # Get average of color signals for this FIDO S_wb[Lookup] = np.sum(WBColor[Lookup])/unique_counts[i] S_rg[Lookup] = np.sum(RGColor[Lookup])/unique_counts[i] S_by[Lookup] = np.sum(BYColor[Lookup])/unique_counts[i] This takes about 7.89 seconds to run, no so long, but this will be included in another loop, so it builds up! I have tried vectorization (shown below) but I couldn't do it FIDOsize = unique_counts[0:numFIDOs[0]:1] Lookup = FIDO ==uniqueFIDOs[0:numFIDOs[0]:1] S_wb[Lookup] = np.sum(WBColor[Lookup])/FIDOsize S_rg[Lookup] = np.sum(RGColor[Lookup])/FIDOsize S_by[Lookup] = np.sum(BYColor[Lookup])/FIDOsize error in array size matching
By my timing, this is about 10 times faster than your original method. I tested with these arrays: import numpy as np sX=200 sY=200 FIDO = np.random.randint(0, sX*sY, (sX, sY)) WBColor = np.random.randint(0, sX*sY, (sX, sY)) RGColor = np.random.randint(0, sX*sY, (sX, sY)) BYColor = np.random.randint(0, sX*sY, (sX, sY)) This is the part I timed: import collections colors = {'wb': WBColor, 'rg': RGColor, 'by': BYColor} planes = colors.keys() S = {plane: np.zeros((sX, sY)) for plane in planes} for plane in planes: counts = collections.defaultdict(int) sums = collections.defaultdict(int) for (i, j), f in np.ndenumerate(FIDO): counts[f] += 1 sums[f] += colors[plane][i, j] for (i, j), f in np.ndenumerate(FIDO): S[plane][i, j] = sums[f]/counts[f] Probably because even though looping in Python is slow, this traverses the data less. Note, the original version gets faster if there are a small number of unique values in FIDO. This takes roughly the same time for most cases.
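A fully vectorized variant is also possible with np.bincount, sketched below under the assumption that FIDO contains non-negative integer labels (as in the test arrays above):

import numpy as np

def label_means(FIDO, color):
    labels = FIDO.ravel()
    sums = np.bincount(labels, weights=color.ravel())   # per-label sums
    counts = np.bincount(labels)                         # per-label pixel counts
    means = sums / np.maximum(counts, 1)                 # guard unused label ids
    return means[labels].reshape(FIDO.shape)             # broadcast back to image shape

# S_wb = label_means(FIDO, WBColor); S_rg = label_means(FIDO, RGColor); ...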
tensorflow -- is it or will it (sometime soon) be compatible with a windows workflow?
I haven't seen anything about Windows compatibility -- is this on the way, or currently available somewhere if I put forth some effort? (I have a Mac and an Ubuntu box, but the Windows machine is the one with the discrete graphics card that I currently use with Theano.)
We haven't tried to build TensorFlow on Windows so far: the only supported platforms are Linux (Ubuntu) and Mac OS X, and we've only built binaries for those platforms. For now, on Windows, the easiest way to get started with TensorFlow would be to use Docker: http://tensorflow.org/get_started/os_setup.md#docker-based_installation It should become easier to add Windows support when Bazel (the build system we are using) adds support for building on Windows, which is on the roadmap for Bazel 0.3. You can see the full Bazel roadmap here. In the meantime, you can follow issue 17 on the TensorFlow GitHub page.
How do I use distributed DNN training in TensorFlow?
Google released TensorFlow today. I have been poking around in the code, and I don't see anything in the code or API about training across a cluster of GPU servers. Does it have distributed training functionality yet?
Updated: The initial release of Distributed TensorFlow occurred on 2/26/2016. The release was announced by coauthor Derek Murray in the original issue here and uses gRPC for inter-process communication. Previous: A distributed implementation of TensorFlow has not been released yet. Support for a distributed implementation is the topic of this issue where coauthor Vijay Vasudevan wrote: we are working on making a distributed implementation available, it's currently not in the initial release and Jeff Dean provided an update: Our current internal distributed extensions are somewhat entangled with Google internal infrastructure, which is why we released the single-machine version first. The code is not yet in GitHub, because it has dependencies on other parts of the Google code base at the moment, most of which have been trimmed, but there are some remaining ones. We realize that distributed support is really important, and it's one of the top features we're prioritizing at the moment.
Where is the folder for Installing tensorflow with pip, Mac OSX?
I just installed TensorFlow using pip with the command: $ pip install tensorflow On the "Getting Started" page for TensorFlow they have an example for convolutional neural networks $ python tensorflow/models/image/mnist/convolutional.py Where is that directory located when installing with pip?
Installing with pip, installs the packages to the directory "site-packages". The following code shows the location of tensorflow as well as where pip installs the packages: $ pip show tensorflow Which return: Metadata-Version: 2.0 Name: tensorflow Version: 0.5.0 Summary: TensorFlow helps the tensors flow Home-page: http://tensorflow.com/ Author: Google Inc. Author-email: opensource@google.com License: Apache 2.0 Location: /usr/local/lib/python2.7/site-packages Requires: six, numpy here Location: shows where the package is installed with $ cd /usr/local/lib/python2.7/site-packages/tensorflow
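You can also ask Python itself where the package landed, which avoids guessing the site-packages path (a generic sketch, not specific to tensorflow):

import imp

# prints e.g. /usr/local/lib/python2.7/site-packages/tensorflow
print(imp.find_module('tensorflow')[1])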
Fail to run word embedding example in tensorflow tutorial with GPUs
I am trying to run the word embedding example code at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/g3doc/tutorials/word2vec (installed with GPU version of tensorflow under Ubuntu 14.04), but it returns the following error message: Found and verified text8.zip Data size 17005207 Most common words (+UNK) [['UNK', 418391], ('the', 1061396), ('of', 593677), ('and', 416629), ('one', 411764)] Sample data [5239, 3084, 12, 6, 195, 2, 3137, 46, 59, 156] 3084 -> 12 originated -> as 3084 -> 5239 originated -> anarchism 12 -> 3084 as -> originated 12 -> 6 as -> a 6 -> 12 a -> as 6 -> 195 a -> term 195 -> 6 term -> a 195 -> 2 term -> of I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 12 I tensorflow/core/common_runtime/gpu/gpu_init.cc:88] Found device 0 with properties: name: GeForce GTX TITAN X major: 5 minor: 2 memoryClockRate (GHz) 1.076 pciBusID 0000:03:00.0 Total memory: 12.00GiB Free memory: 443.32MiB I tensorflow/core/common_runtime/gpu/gpu_init.cc:88] Found device 1 with properties: name: GeForce GTX TITAN X major: 5 minor: 2 memoryClockRate (GHz) 1.076 pciBusID 0000:05:00.0 Total memory: 12.00GiB Free memory: 451.61MiB I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA: 0 1 I tensorflow/core/common_runtime/gpu/gpu_init.cc:122] 0: Y Y I tensorflow/core/common_runtime/gpu/gpu_init.cc:122] 1: Y Y I tensorflow/core/common_runtime/gpu/gpu_device.cc:643] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:03:00.0) I tensorflow/core/common_runtime/gpu/gpu_device.cc:643] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX TITAN X, pci bus id: 0000:05:00.0) I tensorflow/core/common_runtime/gpu/gpu_region_allocator.cc:47] Setting region size to 254881792 I tensorflow/core/common_runtime/gpu/gpu_region_allocator.cc:47] Setting region size to 263835648 I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 12 Initialized Traceback (most recent call last): File "word2vec_basic.py", line 171, in <module> _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 345, in run results = self._do_run(target_list, unique_fetch_targets, feed_dict_string) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 419, in _do_run e.code) tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'GradientDescent/update_Variable_2/ScatterSub': Could not satisfy explicit device specification '' because the node was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/GPU:0' [[Node: GradientDescent/update_Variable_2/ScatterSub = ScatterSub[T=DT_FLOAT, Tindices=DT_INT64, use_locking=false](Variable_2, gradients/concat_1, GradientDescent/update_Variable_2/mul)]] Caused by op u'GradientDescent/update_Variable_2/ScatterSub', defined at: File "word2vec_basic.py", line 145, in <module> optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 167, in minimize name=name) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 256, in apply_gradients update_ops.append(self._apply_sparse(grad, var)) File 
"/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/training/gradient_descent.py", line 40, in _apply_sparse return var.scatter_sub(delta, use_locking=self._use_locking) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 324, in scatter_sub use_locking=use_locking) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 227, in scatter_sub name=name) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op op_def=op_def) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op original_op=self._default_original_op, op_def=op_def) File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__ self._traceback = _extract_stack() When I run the code in CPU version tensorflow, it works just fine. But not for GPU version. I also tried to use tf.device('/cpu:0') to force it using CUP instead of GPU, but it produces the same output. Is there any function in this example cannot be run in GPUs? And how do I switch to CPU without reinstalling CPU version of tensorflow since tf.device('/cpu:0') not working?
It seems a whole bunch of operations used in this example aren't supported on a GPU. A quick workaround is to restrict operations such that only matrix muls are ran on the GPU. There's an example in the docs: http://tensorflow.org/api_docs/python/framework.md See the section on tf.Graph.device(device_name_or_function) I was able to get it working with the following: def device_for_node(n): if n.type == "MatMul": return "/gpu:0" else: return "/cpu:0" with graph.as_default(): with graph.device(device_for_node): ...
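Another workaround that may be worth trying (a sketch; it assumes your TensorFlow build exposes allow_soft_placement in ConfigProto) is to let the runtime fall back to the CPU automatically for ops that have no GPU kernel:

import tensorflow as tf

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
# pass the tutorial's graph explicitly if you built one, e.g. tf.Session(graph=graph, config=config)
with tf.Session(config=config) as session:
    pass   # run the word2vec training steps here exactly as in the tutorial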
Anaconda3 2.4 with python 3.5 installation error (procedure entry not found; Windows 10)
I have just made up my mind to change from python 2.7 to python 3.5 and therefore tried to reinstall Anaconda (64 bit) with the 3.5 environment. When I try to install the package I get several errors in the form of (translation from German, so maybe not exact): The procedure entry "__telemetry_main_return_trigger" could not be found in the DLL "C:\Anaconda3\pythonw.exe". and The procedure entry "__telemetry_main_invoke_trigger" could not be found in the DLL "C:\Anaconda3\python35.dll". The title of the second error message box still points to pythonw.exe. Both errors appear several times - every time an extraction was completed. The installation progress box reads [...] extraction complete. Execute: "C:\Anaconda3\pythonw.exe" "C:\Anaconda3\Lib_nsis.py" postpkg After torturing myself through the installation I get the warning Failed to create Anaconda menus If I ignore it once gives me my lovely error messages and tells me that Failed to initialize Anaconda directories then Failed to add Anaconda to the system PATH Of course nothing works, if I dare to use this mess it installs. What might go wrong? On other computers with Windows 10 it works well. P.S.: An installation of Anaconda2 2.4 with python 2.7 works without any error message, but still is not able to be used (other errors).
Finally I have found the reason. So, if anybody else has this problem: Here the entry points are an issue as well and Michael Sarahan gives the solution. Install the Visual C++ Redistributable for Visual Studio 2015, which is used by the new version of python, first. After that install the Anaconda-package and it should work like a charm.
How to print the value of a Tensor object in TensorFlow?
I have been using the introductory example of matrix multiplication in TensorFlow. matrix1 = tf.constant([[3., 3.]]) matrix2 = tf.constant([[2.],[2.]]) product = tf.matmul(matrix1, matrix2) And when I print the product, it is displaying it as a TensorObject(obviously). product <tensorflow.python.framework.ops.Tensor object at 0x10470fcd0> But how do I know the value of product? The following doesn't help: print product Tensor("MatMul:0", shape=TensorShape([Dimension(1), Dimension(1)]), dtype=float32) I know that graphs run on Sessions, but isn't there any way I can check the output of a TensorObject without running the graph in a session?
The easiest* way to evaluate the actual value of a Tensor object is to pass it to the Session.run() method, or call Tensor.eval() when you have a default session (i.e. in a with tf.Session(): block, or see below). In general,** you cannot print the value of a tensor without running some code in a session. If you are experimenting with the programming model, and want an easy way to evaluate tensors, the tf.InteractiveSession lets you open a session at the start of your program, and then use that session for all Tensor.eval() (and Operation.run()) calls. This can be easier in an interactive setting, such as the shell or an IPython notebook, when it's tedious to pass around a Session object everywhere. This might seem silly for such a small expression, but one of the key ideas in Tensorflow is deferred execution: it's very cheap to build a large and complex expression, and when you want to evaluate it, the back-end (to which you connect with a Session) is able to schedule its execution more efficiently (e.g. executing independent parts in parallel and using GPUs). *  To print the value of a tensor without returning it to your Python program, you can use the tf.Print() op, as And suggests in another answer. Note that you still need to run part of the graph to see the output of this op, which is printed to standard output. If you're running distributed TensorFlow, the tf.Print() op will print its output to the standard output of the task where that op runs. **  You might be able to use the experimental tf.contrib.util.constant_value() function to get the value of a constant tensor, but it isn't intended for general use, and it isn't defined for many operators.
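A minimal sketch of the tf.Print() route mentioned above, applied to the question's matmul example (the value is printed to standard output as a side effect when the graph runs):

import tensorflow as tf

matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.], [2.]])
product = tf.matmul(matrix1, matrix2)
product = tf.Print(product, [product], message="product = ")

with tf.Session() as sess:
    sess.run(product)   # side effect: prints something like "product = [[12]]"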
why is "any()" running slower than using loops?
I've been working in a project that manage big lists and pass the lists trough a lot of tests in order to validate or not each word of the list. The funny thing is that each time that I've used the "faster" tools or generators (like the itertools module) and I make some tests, they seem to be slower. Finally I decided to ask the question because it is possible that I be doing something wrong. The following code will try to test the performance of the any() function vs loops. #!/usr/bin/python3 # import time from unicodedata import normalize """ Import a large list of tests (like 300Mb). The list contains words each one separated in a line. """ PATH='./tests' start=time.time() with open(PATH, encoding='utf-8', mode='rt') as f: tests_list=f.read() print('File reading done in {} seconds'.format(time.time() - start)) start=time.time() tests_list=[line.strip() for line in normalize('NFC',tests_list).splitlines()] print('String formalization, and list strip done in {} seconds'.format(time.time()-start)) print('{} strings'.format(len(tests_list))) """ test to check if "any()" is faster. """ print('Testing the performance of any()') unallowed_combinations=['ab','ac','ad','ae','af','ag','ah','ai','af','ax','ae','rt','rz','bt','du','iz','ip','uy','io','ik','il','iw','ww','wp'] def combination_is_valid(string): if any(combination in string for combination in unallowed_combinations): return False return True def combination_is_valid2(string): for combination in unallowed_combinations: if combination in string: return False return True start=time.time() for string in tests_list: combination_is_valid(string) print('combination_is_valid ended in {} seconds'.format(time.time()-start)) start=time.time() for string in tests_list: combination_is_valid2(string) print('combination_is_valid2 ended in {} seconds'.format(time.time()-start)) The code I posted is pretty representative of the kind of tests I do with my program. And if we take a look to the results: File reading done in 0.22988605499267578 seconds String formalization, and list strip done in 6.803032875061035 seconds 38709922 strings Testing the performance of any() combination_is_valid ended in 80.74802565574646 seconds combination_is_valid2 ended in 41.69514226913452 seconds File reading done in 0.24268722534179688 seconds String formalization, and list strip done in 6.720442771911621 seconds 38709922 strings Testing the performance of any() combination_is_valid ended in 79.05265760421753 seconds combination_is_valid2 ended in 42.24800777435303 seconds I find kinda amazing that using the function with loops its half faster than using any(). What would be the explanation to this? Am I doing something wrong? I interpret the results as if any() generated a list of all the results and check if they are True or False at the end. In the other hand the function that uses loops, after a result is created it is tested so it can exit earlier without generating UN-usefull tests. What do you think? (I used python3.4 under GNU-Linux)
Actually the any() function is equivalent to the following function: def any(iterable): for element in iterable: if element: return True return False which is like your second function, but since any() returns a boolean value by itself, you don't need to check the result and then return a new value. So part of the performance difference is that your wrapper adds a redundant return and if condition, and also calls any() inside another function. The advantage of any() here is that you don't need to wrap it in another function, because it does all of that for you. Also, as @interjay mentioned in a comment, the most important reason, which I missed, is that you are passing a generator expression to any(), which doesn't provide the results at once; since it produces each result on demand, it does extra work. Based on PEP 0289 -- Generator Expressions: The semantics of a generator expression are equivalent to creating an anonymous generator function and calling it. For example: g = (x**2 for x in range(10)) print g.next() is equivalent to: def __gen(exp): for x in exp: yield x**2 g = __gen(iter(range(10))) print g.next() So as you can see, each time Python wants to access the next item it goes through the iterator protocol and calls the generator's next method. The end result is that it's overkill to use any() in such cases.
How to do Xavier initialization on TensorFlow
I'm porting my Caffe network over to TensorFlow but it doesn't seem to have xavier initialization. I'm using truncated_normal but this seems to be making it a lot harder to train.
Now TensorFlow 0.8 has the xavier initializer implementation. https://www.tensorflow.org/versions/r0.8/api_docs/python/contrib.layers.html#xavier_initializer You can use something like this: W = tf.get_variable("W", shape=[784, 256], initializer=tf.contrib.layers.xavier_initializer())
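If you are stuck on a TensorFlow version that predates the contrib initializer, the Glorot/Xavier formula can also be written out by hand (a sketch; the uniform-limit formula follows Glorot & Bengio 2010):

import numpy as np
import tensorflow as tf

def xavier_init(fan_in, fan_out):
    # uniform initialization in [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return tf.random_uniform([fan_in, fan_out], minval=-limit, maxval=limit)

W = tf.Variable(xavier_init(784, 256), name="W")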
Why does TensorFlow example fail when increasing batch size?
I was looking at the Tensorflow MNIST example for beginners and found that in this part: for i in range(1000): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) changing the batch size from 100 to be above 204 causes the model to fail to converge. It works up to 204, but at 205 and any higher number I tried, the accuracy would end up < 10%. Is this a bug, something about the algorithm, something else? This is running their binary installation for OS X, seems to be version 0.5.0.
You're using the very basic linear model in the beginners example? Here's a trick to debug it - watch the cross-entropy as you increase the batch size (the first line is from the example, the second I just added): cross_entropy = -tf.reduce_sum(y_*tf.log(y)) cross_entropy = tf.Print(cross_entropy, [cross_entropy], "CrossE") At a batch size of 204, you'll see: I tensorflow/core/kernels/logging_ops.cc:64] CrossE[92.37558] I tensorflow/core/kernels/logging_ops.cc:64] CrossE[90.107414] But at 205, you'll see a sequence like this, from the start: I tensorflow/core/kernels/logging_ops.cc:64] CrossE[472.02966] I tensorflow/core/kernels/logging_ops.cc:64] CrossE[475.11697] I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1418.6655] I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1546.3833] I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1684.2932] I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1420.02] I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1796.0872] I tensorflow/core/kernels/logging_ops.cc:64] CrossE[nan] Ack - NaN's showing up. Basically, the large batch size is creating such a huge gradient that your model is spiraling out of control -- the updates it's applying are too large, and overshooting the direction it should go by a huge margin. In practice, there are a few ways to fix this. You could reduce the learning rate from .01 to, say, .005, which results in a final accuracy of 0.92. train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy) Or you could use a more sophisticated optimization algorithm (Adam, Momentum, etc.) that tries to do more to figure out the direction of the gradient. Or you could use a more complex model that has more free parameters across which to disperse that big gradient.
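A related fix worth knowing (a sketch against the beginners-tutorial model): dividing the summed loss by the batch size keeps the gradient magnitude roughly constant as the batch grows, so the same learning rate keeps working.

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

batch_size = tf.cast(tf.shape(x)[0], tf.float32)           # number of examples in the feed
cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) / batch_size # mean loss instead of summed loss
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)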
Tensorflow image reading & display
I've got a bunch of images in a format similar to Cifar10 (binary file, size = 96*96*3 bytes per image), one image after another (STL-10 dataset). The file I'm opening has 138MB. I tried to read & check the contents of the Tensors containing the images to be sure that the reading is done right, however I have two questions - Does the FixedLengthRecordReader load the whole file, however just provide inputs one at a time? Since reading the first size bytes should be relatively fast. However, the code takes about two minutes to run. How to get the actual image contents in a displayable format, or display them internally to validate that the images are read well? I did sess.run(uint8image), however the result is empty. The code is below: import tensorflow as tf def read_stl10(filename_queue): class STL10Record(object): pass result = STL10Record() result.height = 96 result.width = 96 result.depth = 3 image_bytes = result.height * result.width * result.depth record_bytes = image_bytes reader = tf.FixedLengthRecordReader(record_bytes=record_bytes) result.key, value = reader.read(filename_queue) print value record_bytes = tf.decode_raw(value, tf.uint8) depth_major = tf.reshape(tf.slice(record_bytes, [0], [image_bytes]), [result.depth, result.height, result.width]) result.uint8image = tf.transpose(depth_major, [1, 2, 0]) return result # probably a hack since I should've provided a string tensor filename_queue = tf.train.string_input_producer(['./data/train_X']) image = read_stl10(filename_queue) print image.uint8image with tf.Session() as sess: result = sess.run(image.uint8image) print result, type(result) Output: Tensor("ReaderRead:1", shape=TensorShape([]), dtype=string) Tensor("transpose:0", shape=TensorShape([Dimension(96), Dimension(96), Dimension(3)]), dtype=uint8) I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 4 I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 4 [empty line for last print] Process finished with exit code 137 I'm running this on my CPU, if that adds anything. EDIT: I found the pure TensorFlow solution thanks to Rosa. Apparently, when using the string_input_producer, in order to see the results, you need to initialize the queue runners. The only required thing to add to the code above is the second line from below: ... with tf.Session() as sess: tf.train.start_queue_runners(sess=sess) ... Afterwards, the image in the result can be displayed with matplotlib.pyplot.imshow(result). I hope this helps someone. If you have any further questions, feel free to ask me or check the link in Rosa's answer.
Just to give a complete answer: filename_queue = tf.train.string_input_producer(['/Users/HANEL/Desktop/tf.png']) # list of files to read reader = tf.WholeFileReader() key, value = reader.read(filename_queue) my_img = tf.image.decode_png(value) # use png or jpg decoder based on your files. init_op = tf.initialize_all_variables() with tf.Session() as sess: sess.run(init_op) # Start populating the filename queue. coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) for i in range(1): #length of your filename list image = my_img.eval() #here is your image Tensor :) print(image.shape) Image.show(Image.fromarray(np.asarray(image))) coord.request_stop() coord.join(threads) Or if you have a directory of images you can add them all via this Github source file @mttk and @salvador-dali: I hope it is what you need
Error while importing Tensorflow in python2.7 in Ubuntu 12.04. 'GLIBC_2.17 not found'
I have installed the TensorFlow bindings with Python successfully. But when I try to import TensorFlow, I get the following error: ImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.17' not found (required by /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so) I have tried to update GLIBC_2.15 to 2.17, but no luck.
Okay so here is the other solution I mentionned in my previous answer, it's more tricky, but should always work on systems with GLIBC>=2.12 and GLIBCXX>=3.4.13. In my case it was on a CentOS 6.7, but it's also fine for Ubuntu 12.04. We're going to need a version of gcc that supports c++11, either on another machine or an isolated install; but not for the moment. What we're gonna do here is edit the _pywrap_tensorflow.so binary in order to 'weakify' its libc and libstdc++ dependencies, so that ld accepts to link the stubs we're gonna make. Then we'll make those stubs for the missing symbols, and finally we're gonna pre-load all of this when running python. First of all, I want to thank James for his great article ( http://www.lightofdawn.org/wiki/wiki.cgi/NewAppsOnOldGlibc ) and precious advices, I couldn't have made it without him. So let's start by weakifying the dependencies, it's just about replacing the right bytes in _pywrap_tensorflow.so. Please note that this step only works for the current version of tensorflow (0.6.0). So if its not done already create and activate your virtualenv if you have one (if you're not admin virtualenv is a solution, another is to add --user flag to pip command), and install tensorflow 0.6.0 (replace cpu by gpu in the url if you want the gpu version) : pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl And let's weakify all the annoying dependencies, here is the command for the cpu version of tensorflow: TENSORFLOW_DIR=`python -c "import imp; print(imp.find_module('tensorflow')[1])"` for addr in 0xC6A93C 0xC6A99C 0xC6A9EC 0xC6AA0C 0xC6AA1C 0xC6AA3C; do printf '\x02' | dd conv=notrunc of=${TENSORFLOW_DIR}/python/_pywrap_tensorflow.so bs=1 seek=$((addr)) ; done And here is the gpu one (run only the correct one or you'll corrupt the binary): TENSORFLOW_DIR=`python -c "import imp; print(imp.find_module('tensorflow')[1])"` for addr in 0xDC5EA4 0xDC5F04 0xDC5F54 0xDC5F74 0xDC5F84 0xDC5FA4; do printf '\x02' | dd conv=notrunc of=${TENSORFLOW_DIR}/python/_pywrap_tensorflow.so bs=1 seek=$((addr)) ; done You can check it with: readelf -V ${TENSORFLOW_DIR}/python/_pywrap_tensorflow.so Have a look at the article if you want to understand what's going on here. Now we're gonna make the stubs for the missing libc symbols: mkdir ~/my_stubs cd ~/my_stubs MYSTUBS=~/my_stubs printf "#include <time.h>\n#include <string.h>\nvoid* memcpy(void *dest, const void *src, size_t n) {\nreturn memmove(dest, src, n);\n}\nint clock_gettime(clockid_t clk_id, struct timespec *tp) {\nreturn clock_gettime(clk_id, tp);\n}" > mylibc.c gcc -s -shared -o mylibc.so -fPIC -fno-builtin mylibc.c You need to perform that step on the machine with the missing dependencies (or machine with similar versions of standard libraries (in a cluster for example)). Now we're gonna probably change of machine since we need a gcc that supports c++11, and it is probably not on the machine that lacks all the dependencies (or you can use an isolated install of a recent gcc). In the following I assume we're still in ~/my_stubs and somehow you share your home accross the machines, otherwise you'll just have to copy the .so files we're gonna generate when it's done. 
There is one more stub we can write by hand for libstdc++, and the remaining missing ones we're going to compile from gcc source (it might take some time to clone the repository): printf "#include <functional>\nvoid std::__throw_bad_function_call(void) {\nexit(1);\n}" > bad_function.cc gcc -std=c++11 -s -shared -o bad_function.so -fPIC -fno-builtin bad_function.cc git clone https://github.com/gcc-mirror/gcc.git cd gcc mkdir my_include mkdir my_include/ext cp libstdc++-v3/include/ext/aligned_buffer.h my_include/ext gcc -I$PWD/my_include -std=c++11 -fpermissive -s -shared -o $MYSTUBS/hashtable.so -fPIC -fno-builtin libstdc++-v3/src/c++11/hashtable_c++0x.cc gcc -std=c++11 -fpermissive -s -shared -o $MYSTUBS/chrono.so -fPIC -fno-builtin libstdc++-v3/src/c++11/chrono.cc gcc -std=c++11 -fpermissive -s -shared -o $MYSTUBS/random.so -fPIC -fno-builtin libstdc++-v3/src/c++11/random.cc gcc -std=c++11 -fpermissive -s -shared -o $MYSTUBS/hash_bytes.so -fPIC -fno-builtin ./libstdc++-v3/libsupc++/hash_bytes.cc And that's it! You can now run a tensorflow python script by preloading all our shared libraries (and your local libstdc++): LIBSTDCPP=`ldconfig -p | grep libstdc++.so.6 | grep 64 | cut -d' ' -f4` #For 64bit machines LD_PRELOAD="$MYSTUBS/mylibc.so:$MYSTUBS/random.so:$MYSTUBS/hash_bytes.so:$MYSTUBS/chrono.so:$MYSTUBS/hashtable.so:$MYSTUBS/bad_function.so:$LIBSTDCPP" python ${TENSORFLOW_DIR}/models/image/mnist/convolutional.py :)
Randomly change the prompt in the Python interpreter
It's kind of boring to always see the >>> prompt in Python. What would be the best way to go about randomly changing the prompt prefix? I imagine an interaction like: This is a tobacconist!>> import sys Sorry?>> import math Sorry?>> print sys.ps1 Sorry? What?>>
According to the docs, if you assign a non-string object to sys.ps1 then it will evaluate the str function of it each time: If a non-string object is assigned to either variable, its str() is re-evaluated each time the interpreter prepares to read a new interactive command; this can be used to implement a dynamic prompt. Well now it's obvious, you should make it dynamic! Make an object with a __str__ method where you can place any logic you want: class Prompt: def __str__(self): # Logic to randomly determine string return string You can also make changes or insert things into this class as you go too. So for example, you could have a list of messages in Prompt that you append to, or change, and that will affect the console message.
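For a concrete (if silly) version of that idea, here is a minimal sketch that just picks a random message each time; the message list is made up and random.choice is all the "logic" there is:

import random
import sys

class Prompt:
    def __init__(self, messages=None):
        # any list of strings will do; these are just examples
        self.messages = messages or ["This is a tobacconist!", "Sorry?", "What?"]

    def __str__(self):
        # called by the interpreter every time it prints the prompt
        return random.choice(self.messages) + ">> "

sys.ps1 = Prompt()

Paste that into an interactive session (or a PYTHONSTARTUP file) and the prompt will change on every line.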
Use attribute and target matrices for TensorFlow Linear Regression Python
I'm trying to follow this tutorial. TensorFlow just came out and I'm really trying to understand it. I'm familiar with penalized linear regression like Lasso, Ridge, and ElasticNet and its usage in scikit-learn. For scikit-learn Lasso regression, all I need to input into the regression algorithm is DF_X [an M x N dimensional attribute matrix (pd.DataFrame)] and SR_y [an M dimensional target vector (pd.Series)]. The Variable structure in TensorFlow is a bit new to me and I'm not sure how to structure my input data into what it wants. It seems as if softmax regression is for classification. How can I restructure my DF_X (M x N attribute matrix) and SR_y (M dimensional target vector) to input into tensorflow for linear regression? My current method for doing a Linear Regression uses pandas, numpy, and sklearn and it's shown below. I think this question will be really helpful for people getting familiar with TensorFlow: #!/usr/bin/python import pandas as pd import numpy as np import tensorflow as tf from sklearn.linear_model import LassoCV #Create DataFrames for attribute and target matrices DF_X = pd.DataFrame(np.array([[0,0,1],[2,3,1],[4,5,1],[3,4,1]]),columns=["att1","att2","att3"],index=["s1","s2","s3","s4"]) SR_y = pd.Series(np.array([3,2,5,8]),index=["s1","s2","s3","s4"],name="target") print DF_X #att1 att2 att3 #s1 0 0 1 #s2 2 3 1 #s3 4 5 1 #s4 3 4 1 print SR_y #s1 3 #s2 2 #s3 5 #s4 8 #Name: target, dtype: int64 #Create Linear Model (Lasso Regression) model = LassoCV() model.fit(DF_X,SR_y) print model #LassoCV(alphas=None, copy_X=True, cv=None, eps=0.001, fit_intercept=True, #max_iter=1000, n_alphas=100, n_jobs=1, normalize=False, positive=False, #precompute='auto', random_state=None, selection='cyclic', tol=0.0001, #verbose=False) print model.coef_ #[ 0. 0.3833346 0. ]
Softmax is only an output/activation function (used in logistic regression, for example); it is not a model like model = LassoCV() model.fit(DF_X,SR_y) Therefore you can't simply give it data with a fit method. However, you can easily build your model out of TensorFlow functions. First of all, you have to create a computational graph; for linear regression you will create tensors with the size of your data. They are only tensors, and you will feed them your arrays in another part of the program. import tensorflow as tf x = tf.placeholder("float", [4, 3]) y_ = tf.placeholder("float",[4]) Then you create two variables that will contain the initial weights of the model W = tf.Variable(tf.zeros([3,1])) b = tf.Variable(tf.zeros([1])) And now you can create the model itself (you want regression, not classification, therefore you don't need to use tf.nn.softmax ) y=tf.matmul(x,W) + b Since you have a regression with a linear model, you will use loss=tf.reduce_sum(tf.square(y_ - y)) Then we will train the model with the same step as in the tutorial train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss) Now that you have created the computational graph, you have to write one more part of the program, where you will use this graph to work with your data. init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) sess.run(train_step, feed_dict={x:np.asarray(DF_X),y_:np.asarray(SR_y)}) Here you give your data to the computational graph with the help of feed_dict. In TensorFlow you provide the data as numpy arrays. If you want to see the current loss you can run sess.run(loss,feed_dict={x:np.asarray(DF_X),y_:np.asarray(SR_y)})
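Putting the pieces together, a minimal end-to-end sketch might look like the following (untested; it assumes DF_X and SR_y from the question are already defined, the number of steps is arbitrary, and y_ is given an explicit [4, 1] shape so the subtraction doesn't broadcast):

import numpy as np
import tensorflow as tf

x = tf.placeholder("float", [4, 3])
y_ = tf.placeholder("float", [4, 1])
W = tf.Variable(tf.zeros([3, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(x, W) + b
loss = tf.reduce_sum(tf.square(y_ - y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
feed = {x: np.asarray(DF_X), y_: np.asarray(SR_y).reshape(4, 1)}
for step in range(2000):                # arbitrary number of iterations
    sess.run(train_step, feed_dict=feed)
print(sess.run(W))                      # learned coefficients
print(sess.run(b))                      # learned intercept
print(sess.run(loss, feed_dict=feed))   # final training loss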
Converting large XML file to relational database
I'm trying to figure out the best way to accomplish the following: Download a large XML (1GB) file on daily basis from a third-party website Convert that XML file to relational database on my server Add functionality to search the database For the first part, is this something that would need to be done manually, or could it be accomplished with a cron? Most of the questions and answers related to XML and relational databases refer to Python or PHP. Could this be done with javascript/nodejs as well? If this question is better suited for a different StackExchange forum, please let me know and I will move it there instead. Below is a sample of the xml code: <case-file> <serial-number>123456789</serial-number> <transaction-date>20150101</transaction-date> <case-file-header> <filing-date>20140101</filing-date> </case-file-header> <case-file-statements> <case-file-statement> <code>AQ123</code> <text>Case file statement text</text> </case-file-statement> <case-file-statement> <code>BC345</code> <text>Case file statement text</text> </case-file-statement> </case-file-statements> <classifications> <classification> <international-code-total-no>1</international-code-total-no> <primary-code>025</primary-code> </classification> </classifications> </case-file> Here's some more information about how these files will be used: All XML files will be in the same format. There are probably a few dozen elements within each record. The files are updated by a third party on a daily basis (and are available as zipped files on the third-party website). Each day's file represents new case files as well as updated case files. The goal is to allow a user to search for information and organize those search results on the page (or in a generated pdf/excel file). For example, a user might want to see all case files that include a particular word within the <text> element. Or a user might want to see all case files that include primary code 025 (<primary-code> element) and that were filed after a particular date (<filing-date> element). The only data entered into the database will be from the XML files--users won't be adding any of their own information to the database.
All steps could certainly be accomplished using node.js. There are modules available that will help you with each of these tasks: node-cron: lets you easily set up cron tasks in your node program. Another option would be to set up a cron task on your operating system (lots of resources available for your favourite OS). download: module to easily download files from a URL. xml-stream: allows you to stream a file and register events that fire when the parser encounters certain XML elements. I have successfully used this module to parse KML files (granted they were significantly smaller than your files). node-postgres: node client for PostgreSQL (I am sure there are clients for many other common RDBMS, PG is the only one I have used so far). Most of these modules have pretty great examples that will get you started. Here's how you would probably set up the XML streaming part: var XmlStream = require('xml-stream'); var xml = fs.createReadStream('path/to/file/on/disk'); // or stream directly from your online source var xmlStream = new XmlStream(xml); xmlStream.on('endElement case-file', function(element) { // create and execute SQL query/queries here for this element }); xmlStream.on('end', function() { // done reading elements // do further processing / query database, etc. });
Why is this TensorFlow implementation vastly less successful than Matlab's NN?
As a toy example I'm trying to fit a function f(x) = 1/x from 100 no-noise data points. The matlab default implementation is phenomenally successful with mean square difference ~10^-10, and interpolates perfectly. I implement a neural network with one hidden layer of 10 sigmoid neurons. I'm a beginner at neural networks so be on your guard against dumb code. import tensorflow as tf import numpy as np def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) #Can't make tensorflow consume ordinary lists unless they're parsed to ndarray def toNd(lst): lgt = len(lst) x = np.zeros((1, lgt), dtype='float32') for i in range(0, lgt): x[0,i] = lst[i] return x xBasic = np.linspace(0.2, 0.8, 101) xTrain = toNd(xBasic) yTrain = toNd(map(lambda x: 1/x, xBasic)) x = tf.placeholder("float", [1,None]) hiddenDim = 10 b = bias_variable([hiddenDim,1]) W = weight_variable([hiddenDim, 1]) b2 = bias_variable([1]) W2 = weight_variable([1, hiddenDim]) hidden = tf.nn.sigmoid(tf.matmul(W, x) + b) y = tf.matmul(W2, hidden) + b2 # Minimize the squared errors. loss = tf.reduce_mean(tf.square(y - yTrain)) optimizer = tf.train.GradientDescentOptimizer(0.5) train = optimizer.minimize(loss) # For initializing the variables. init = tf.initialize_all_variables() # Launch the graph sess = tf.Session() sess.run(init) for step in xrange(0, 4001): train.run({x: xTrain}, sess) if step % 500 == 0: print loss.eval({x: xTrain}, sess) Mean square difference ends at ~2*10^-3, so about 7 orders of magnitude worse than matlab. Visualising with xTest = np.linspace(0.2, 0.8, 1001) yTest = y.eval({x:toNd(xTest)}, sess) import matplotlib.pyplot as plt plt.plot(xTest,yTest.transpose().tolist()) plt.plot(xTest,map(lambda x: 1/x, xTest)) plt.show() we can see the fit is systematically imperfect: while the matlab one looks perfect to the naked eye with the differences uniformly < 10^-5: I have tried to replicate with TensorFlow the diagram of the Matlab network: Incidentally, the diagram seems to imply a tanh rather than sigmoid activation function. I cannot find it anywhere in documentation to be sure. However, when I try to use a tanh neuron in TensorFlow the fitting quickly fails with nan for variables. I do not know why. Matlab uses Levenberg–Marquardt training algorithm. Bayesian regularization is even more successful with mean squares at 10^-12 (we are probably in the area of vapours of float arithmetic). Why is TensorFlow implementation so much worse, and what can I do to make it better?
I tried training for 50000 iterations and it got to 0.00012 error. It takes about 180 seconds on a Tesla K40. It seems that for this kind of problem, first-order gradient descent is not a good fit (pun intended), and you need Levenberg–Marquardt or L-BFGS. I don't think anyone has implemented them in TensorFlow yet. Edit: Use tf.train.AdamOptimizer(0.1) for this problem. It gets to 3.13729e-05 after 4000 iterations. Also, the GPU with the default strategy seems like a bad idea for this problem. There are many small operations and the overhead causes the GPU version to run 3x slower than the CPU on my machine.
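In the code from the question that would just mean swapping the optimizer line, e.g.:

optimizer = tf.train.AdamOptimizer(0.1)   # instead of GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

with everything else left unchanged.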
How can numpy be so much faster than my Fortran routine?
I get a 512^3 array representing a Temperature distribution from a simulation (written in Fortran). The array is stored in a binary file that's about 1/2G in size. I need to know the minimum, maximum and mean of this array and as I will soon need to understand Fortran code anyway, I decided to give it a go and came up with the following very easy routine. integer gridsize,unit,j real mini,maxi double precision mean gridsize=512 unit=40 open(unit=unit,file='T.out',status='old',access='stream',& form='unformatted',action='read') read(unit=unit) tmp mini=tmp maxi=tmp mean=tmp do j=2,gridsize**3 read(unit=unit) tmp if(tmp>maxi)then maxi=tmp elseif(tmp<mini)then mini=tmp end if mean=mean+tmp end do mean=mean/gridsize**3 close(unit=unit) This takes about 25 seconds per file on the machine I use. That struck me as being rather long and so I went ahead and did the following in Python: import numpy mmap=numpy.memmap('T.out',dtype='float32',mode='r',offset=4,\ shape=(512,512,512),order='F') mini=numpy.amin(mmap) maxi=numpy.amax(mmap) mean=numpy.mean(mmap) Now, I expected this to be faster of course, but I was really blown away. It takes less than a second under identical conditions. The mean deviates from the one my Fortran routine finds (which I also ran with 128-bit floats, so I somehow trust it more) but only on the 7th significant digit or so. How can numpy be so fast? I mean you have to look at every entry of an array to find these values, right? Am I doing something very stupid in my Fortran routine for it to take so much longer? EDIT: To answer the questions in the comments: Yes, also I ran the Fortran routine with 32-bit and 64-bit floats but it had no impact on performance. I used iso_fortran_env which provides 128-bit floats. Using 32-bit floats my mean is off quite a bit though, so precision is really an issue. I ran both routines on different files in different order, so the caching should have been fair in the comparison I guess ? I actually tried open MP, but to read from the file at different positions at the same time. Having read your comments and answers this sounds really stupid now and it made the routine take a lot longer as well. I might give it a try on the array operations but maybe that won't even be necessary. The files are actually 1/2G in size, that was a typo, Thanks. I will try the array implementation now. EDIT 2: I implemented what @Alexander Vogt and @casey suggested in their answers, and it is as fast as numpy but now I have a precision problem as @Luaan pointed out I might get. Using a 32-bit float array the mean computed by sum is 20% off. Doing ... real,allocatable :: tmp (:,:,:) double precision,allocatable :: tmp2(:,:,:) ... tmp2=tmp mean=sum(tmp2)/size(tmp) ... Solves the issue but increases computing time (not by very much, but noticeably). Is there a better way to get around this issue? I couldn't find a way to read singles from the file directly to doubles. And how does numpy avoid this? Thanks for all the help so far.
Your Fortran implementation suffers two major shortcomings: You mix IO and computations (and read from the file entry by entry). You don't use vector/matrix operations. This implementation does perform the same operation as yours and is faster by a factor of 20 on my machine: program test integer gridsize,unit real mini,maxi,mean real, allocatable :: tmp (:,:,:) gridsize=512 unit=40 allocate( tmp(gridsize, gridsize, gridsize)) open(unit=unit,file='T.out',status='old',access='stream',& form='unformatted',action='read') read(unit=unit) tmp close(unit=unit) mini = minval(tmp) maxi = maxval(tmp) mean = sum(tmp)/gridsize**3 print *, mini, maxi, mean end program The idea is to read in the whole file into one array tmp in one go. Then, I can use the functions MAXVAL, MINVAL, and SUM on the array directly. For the accuracy issue: Simply using double precision values and doing the conversion on the fly as mean = sum(real(tmp, kind=kind(1.d0)))/real(gridsize**3, kind=kind(1.d0)) only marginally increases the calculation time. I tried performing the operation element-wise and in slices, but that did only increase the required time at the default optimization level. At -O3, the element-wise addition performs ~3 % better than the array operation. The difference between double and single precision operations is less than 2% on my machine - on average (the individual runs deviate by far more). Here is a very fast implementation using LAPACK: program test integer gridsize,unit, i, j real mini,maxi integer :: t1, t2, rate real, allocatable :: tmp (:,:,:) real, allocatable :: work(:) ! double precision :: mean real :: mean real :: slange call system_clock(count_rate=rate) call system_clock(t1) gridsize=512 unit=40 allocate( tmp(gridsize, gridsize, gridsize), work(gridsize)) open(unit=unit,file='T.out',status='old',access='stream',& form='unformatted',action='read') read(unit=unit) tmp close(unit=unit) mini = minval(tmp) maxi = maxval(tmp) ! mean = sum(tmp)/gridsize**3 ! mean = sum(real(tmp, kind=kind(1.d0)))/real(gridsize**3, kind=kind(1.d0)) mean = 0.d0 do j=1,gridsize do i=1,gridsize mean = mean + slange('1', gridsize, 1, tmp(:,i,j),gridsize, work) enddo !i enddo !j mean = mean / gridsize**3 print *, mini, maxi, mean call system_clock(t2) print *,real(t2-t1)/real(rate) end program This uses the single precision matrix 1-norm SLANGE on matrix columns. The run-time is even faster than the approach using single precision array functions - and does not show the precision issue.
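As for the "how does numpy avoid this?" part of the question: np.mean on a float32 array accumulates in float32 by default (numpy's pairwise summation just keeps the error small); if you want the same double-precision accumulation as in the Fortran fix, you can request it explicitly. A small sketch reusing the memmap from the question:

import numpy as np

mmap = np.memmap('T.out', dtype='float32', mode='r', offset=4,
                 shape=(512, 512, 512), order='F')
mean64 = mmap.mean(dtype=np.float64)   # force a double-precision accumulator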
sampling multinomial from small log probability vectors in numpy/scipy
Is there a function in numpy/scipy that lets you sample multinomial from a vector of small log probabilities, without losing precision? example: # sample element randomly from these log probabilities l = [-900, -1680] the naive method fails because of underflow: import scipy import numpy as np # this makes a all zeroes a = np.exp(l) / scipy.misc.logsumexp(l) r = np.random.multinomial(1, a) this is one attempt: def s(l): m = np.max(l) norm = m + np.log(np.sum(np.exp(l - m))) p = np.exp(l - norm) return np.where(np.random.multinomial(1, p) == 1)[0][0] is this the best/fastest method and can np.exp() in the last step be avoided?
First of all, I believe the problem you're encountering is because you're normalizing your probabilities incorrectly. This line is incorrect: a = np.exp(l) / scipy.misc.logsumexp(l) You're dividing a probability by a log probability, which makes no sense. Instead you probably want a = np.exp(l - scipy.misc.logsumexp(l)) If you do that, you find a = [1, 0] and your multinomial sampler works as expected up to floating point precision in the second probability. A Solution for Small N: Histograms That said, if you still need more precision and performance is not as much of a concern, one way you could make progress is by implementing a multinomial sampler from scratch, and then modifying this to work at higher precision. NumPy's multinomial function is implemented in Cython, and essentially performs a loop over a number of binomial samples and combines them into a multinomial sample. and you can call it like this: np.random.multinomial(10, [0.1, 0.2, 0.7]) # [0, 1, 9] (Note that the precise output values here & below are random, and will change from call to call). Another way you might implement a multinomial sampler is to generate N uniform random values, then compute the histogram with bins defined by the cumulative probabilities: def multinomial(N, p): rand = np.random.uniform(size=N) p_cuml = np.cumsum(np.hstack([[0], p])) p_cuml /= p_cuml[-1] return np.histogram(rand, bins=p_cuml)[0] multinomial(10, [0.1, 0.2, 0.7]) # [1, 1, 8] With this method in mind, we can think about doing things to higher precision by keeping everything in log-space. The main trick is to realize that the log of uniform random deviates is equivalent to the negative of exponential random deviates, and so you can do everything above without ever leaving log space: def multinomial_log(N, logp): log_rand = -np.random.exponential(size=N) logp_cuml = np.logaddexp.accumulate(np.hstack([[-np.inf], logp])) logp_cuml -= logp_cuml[-1] return np.histogram(log_rand, bins=logp_cuml)[0] multinomial_log(10, np.log([0.1, 0.2, 0.7])) # [1, 2, 7] The resulting multinomial draws will maintain precision even for very small values in the p array. Unfortunately, these histogram-based solutions will be much slower than the native numpy.multinomial function, so if performance is an issue you may need another approach. One option would be to adapt the Cython code linked above to work in log-space, using similar mathematical tricks as I used here. A Solution for Large N: Poisson Approximation The problem with the above solution is that as N grows large, it becomes very slow. I was thinking about this and realized there's a more efficient way forward, despite np.random.multinomial failing for probabilities smaller than 1E-16 or so. Here's an example of that failure: on a 64-bit machine, this will always give zero for the first entry because of the way the code is implemented, when in reality it should give something near 10: np.random.multinomial(1E18, [1E-17, 1]) # array([ 0, 1000000000000000000]) If you dig into the source, you can trace this issue to the binomial function upon which the multinomial function is built. 
The cython code internally does something like this: def multinomial_basic(N, p, size=None): results = np.array([np.random.binomial(N, pi, size) for pi in p]) results[-1] = int(N) - results[:-1].sum(0) return np.rollaxis(results, 0, results.ndim) multinomial_basic(1E18, [1E-17, 1]) # array([ 0, 1000000000000000000]) The problem is that the binomial function chokes on very small values of p – this is because the algorithm computes the value (1 - p), so the value of p is limited by floating-point precision. So what can we do? Well, it turns out that for small values of p, the Poisson distribution is an extremely good approximation of the binomial distribution, and the implementation doesn't have these issues. So we can build a robust multinomial function based on a robust binomial sampler that switches to a Poisson sampler at small p: def binomial_robust(N, p, size=None): if p < 1E-7: return np.random.poisson(N * p, size) else: return np.random.binomial(N, p, size) def multinomial_robust(N, p, size=None): results = np.array([binomial_robust(N, pi, size) for pi in p]) results[-1] = int(N) - results[:-1].sum(0) return np.rollaxis(results, 0, results.ndim) multinomial_robust(1E18, [1E-17, 1]) array([ 12, 999999999999999988]) The first entry is nonzero and near 10 as expected! Note that we can't use N larger than 1E18, because it will overflow the long integer. But we can confirm that our approach works for smaller probabilities using the size parameter, and averaging over results: p = [1E-23, 1E-22, 1E-21, 1E-20, 1] size = int(1E6) multinomial_robust(1E18, p, size).mean(0) # array([ 1.70000000e-05, 9.00000000e-05, 9.76000000e-04, # 1.00620000e-02, 1.00000000e+18]) We see that even for these very small probabilities, the multinomial values are turning up in the right proportion. The result is a very robust and very fast approximation to the multinomial distribution for small p.
How do I know if I can disable SQLALCHEMY_TRACK_MODIFICATIONS?
Every time I run my app that uses Flask-SQLAlchemy I get the following warning that the SQLALCHEMY_TRACK_MODIFICATIONS option will be disabled. /home/david/.virtualenvs/flask-sqlalchemy/lib/python3.5/site-packages/flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning. warnings.warn('SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.') I tried to find out what this option does, but the Flask-SQLAlchemy documentation isn't clear about what uses this tracking. SQLALCHEMY_TRACK_MODIFICATIONS If set to True (the default) Flask-SQLAlchemy will track modifications of objects and emit signals. This requires extra memory and can be disabled if not needed. How do I find out if my project requires SQLALCHEMY_TRACK_MODIFICATIONS = True or if I can safely disable this feature and save memory on my server?
Most likely your application doesn't use the Flask-SQLAlchemy event system, so you're probably safe to turn it off. You'll need to audit the code to verify--you're looking for anything that hooks into models_committed or before_models_committed. If you do find that you're using the Flask-SQLAlchemy event system, you probably should update the code to use SQLAlchemy's built-in event system instead. To turn off the Flask-SQLAlchemy event system (and disable the warning), just add SQLALCHEMY_TRACK_MODIFICATIONS = False to your app config until the default is changed (most likely in Flask-SQLAlchemy v3). Background--here's what the warning is telling you: Flask-SQLAlchemy has its own event notification system that gets layered on top of SQLAlchemy. To do this, it tracks modifications to the SQLAlchemy session. This takes extra resources, so the option SQLALCHEMY_TRACK_MODIFICATIONS allows you to disable the modification tracking system. Currently the option defaults to True, but in the future, that default will change to False, thereby disabling the event system. As far as I understand, the rationale for the change is three-fold: Not many people use Flask-SQLAlchemy's event system, but most people don't realize they can save system resources by disabling it. So a saner default is to disable it and those who want it can turn it on. The event system in Flask-SQLAlchemy has been rather buggy (see issues linked to in the pull request mentioned below), requiring additional maintenance for a feature that few people use. In v0.7, SQLAlchemy itself added a powerful event system including the ability to create custom events. Ideally, the Flask-SQLAlchemy event system should do nothing more than create a few custom SQLAlchemy event hooks and listeners, and then let SQLAlchemy itself manage the event trigger. You can see more in the discussion around the pull request that started triggering this warning.
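For example, with the usual Flask-SQLAlchemy setup that is just:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'   # example URI
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)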
What does from __future__ import absolute_import actually do?
I have answered a question regarding absolute imports in Python, which I thought I understood based on reading the Python 2.5 changelog and accompanying PEP. However, upon installing Python 2.5 and attempting to craft an example of properly using from __future__ import absolute_import, I realize things are not so clear. Straight from the changelog linked above, this statement accurately summarized my understanding of the absolute import change: Let's say you have a package directory like this: pkg/ pkg/__init__.py pkg/main.py pkg/string.py This defines a package named pkg containing the pkg.main and pkg.string submodules. Consider the code in the main.py module. What happens if it executes the statement import string? In Python 2.4 and earlier, it will first look in the package's directory to perform a relative import, finds pkg/string.py, imports the contents of that file as the pkg.string module, and that module is bound to the name "string" in the pkg.main module's namespace. So I created this exact directory structure: $ ls -R .: pkg/ ./pkg: __init__.py main.py string.py __init__.py and string.py are empty. main.py contains the following code: import string print string.ascii_uppercase As expected, running this with Python 2.5 fails with an AttributeError: $ python2.5 pkg/main.py Traceback (most recent call last): File "pkg/main.py", line 2, in <module> print string.ascii_uppercase AttributeError: 'module' object has no attribute 'ascii_uppercase' However, further along in the 2.5 changelog, we find this (emphasis added): In Python 2.5, you can switch import's behaviour to absolute imports using a from __future__ import absolute_import directive. This absolute-import behaviour will become the default in a future version (probably Python 2.7). Once absolute imports are the default, import string will always find the standard library's version. I thus created pkg/main2.py, identical to main.py but with the additional future import directive. It now looks like this: from __future__ import absolute_import import string print string.ascii_uppercase Running this with Python 2.5, however... fails with an AttributeError: $ python2.5 pkg/main2.py Traceback (most recent call last): File "pkg/main2.py", line 3, in <module> print string.ascii_uppercase AttributeError: 'module' object has no attribute 'ascii_uppercase' This pretty flatly contradicts the statement that import string will always find the std-lib version with absolute imports enabled. 
What's more, despite the warning that absolute imports are scheduled to become the "new default" behavior, I hit this same problem using both Python 2.7, with or without the __future__ directive: $ python2.7 pkg/main.py Traceback (most recent call last): File "pkg/main.py", line 2, in <module> print string.ascii_uppercase AttributeError: 'module' object has no attribute 'ascii_uppercase' $ python2.7 pkg/main2.py Traceback (most recent call last): File "pkg/main2.py", line 3, in <module> print string.ascii_uppercase AttributeError: 'module' object has no attribute 'ascii_uppercase' as well as Python 3.5, with or without (assuming the print statement is changed in both files): $ python3.5 pkg/main.py Traceback (most recent call last): File "pkg/main.py", line 2, in <module> print(string.ascii_uppercase) AttributeError: module 'string' has no attribute 'ascii_uppercase' $ python3.5 pkg/main2.py Traceback (most recent call last): File "pkg/main2.py", line 3, in <module> print(string.ascii_uppercase) AttributeError: module 'string' has no attribute 'ascii_uppercase' I have tested other variations of this. Instead of string.py, I have created an empty module -- a directory named string containing only an empty __init__.py -- and instead of issuing imports from main.py, I have cd'd to pkg and run imports directly from the REPL. Neither of these variations (nor a combination of them) changed the results above. I cannot reconcile this with what I have read about the __future__ directive and absolute imports. It seems to me that this is easily explicable by the following (this is from the Python 2 docs but this statement remains unchanged in the same docs for Python 3): sys.path (...) As initialized upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. So what am I missing? Why does the __future__ statement seemingly not do what it says, and what is the resolution of this contradiction between these two sections of documentation, as well as between described and actual behavior?
The changelog is sloppily worded. from __future__ import absolute_import does not care about whether something is part of the standard library, and import string will not always give you the standard-library module with absolute imports on. from __future__ import absolute_import means that if you import string, Python will always look for a top-level string module, rather than current_package.string. However, it does not affect the logic Python uses to decide what file is the string module. When you do python pkg/script.py pkg/script.py doesn't look like part of a package to Python. Following the normal procedures, the pkg directory is added to the path, and all .py files in the pkg directory look like top-level modules. import string finds pkg/string.py not because it's doing a relative import, but because pkg/string.py appears to be the top-level module string. The fact that this isn't the standard-library string module doesn't come up. To run the file as part of the pkg package, you could do python -m pkg.script In this case, the pkg directory will not be added to the path. However, the current directory will be added to the path. You can also add some boilerplate to pkg/script.py to make Python treat it as part of the pkg package even when run as a file: if __name__ == '__main__' and __package__ is None: __package__ = 'pkg' However, this won't affect sys.path. You'll need some additional handling to remove the pkg directory from the path, and if pkg's parent directory isn't on the path, you'll need to stick that on the path too.
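One possible (untested) shape for that extra path handling, building on the boilerplate above; treat it as a sketch rather than a recipe:

if __name__ == '__main__' and __package__ is None:
    import os
    import sys
    pkg_dir = os.path.dirname(os.path.abspath(__file__))    # .../pkg
    parent_dir = os.path.dirname(pkg_dir)                    # directory containing pkg/
    # drop the pkg directory that 'python pkg/script.py' put on sys.path
    sys.path[:] = [p for p in sys.path
                   if os.path.abspath(p or '.') != pkg_dir]
    # make sure the parent of pkg/ is importable
    if parent_dir not in sys.path:
        sys.path.insert(0, parent_dir)
    __package__ = 'pkg'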
How to install xgboost package in python (windows platform)?
http://xgboost.readthedocs.org/en/latest/python/python_intro.html On the homepage of xgboost (above link), it says: To install XGBoost, do the following steps: You need to run make in the root directory of the project In the python-package directory run python setup.py install However, when I did it, for step 1 the following error appeared: make : The term 'make' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. Then I skipped step 1 and did step 2 directly, and another error appeared: Traceback (most recent call last): File "setup.py", line 19, in <module> LIB_PATH = libpath['find_lib_path']() File "xgboost/libpath.py", line 44, in find_lib_path 'List of candidates:\n' + ('\n'.join(dll_path))) __builtin__.XGBoostLibraryNotFound: Cannot find XGBoost Libarary in the candicate path, did you install compilers and run build.sh in root path? Does anyone know how to install xgboost for python on the Windows 10 platform? Thanks for your help!
Note that as of the most recent release the Microsoft Visual Studio instructions no longer seem to apply as this link returns a 404 error: https://github.com/dmlc/xgboost/tree/master/windows You can read more about the removal of the MSVC build from Tianqi Chen's comment here. So here's what I did to finish a 64-bit build on Windows: Download and install MinGW-64: http://sourceforge.net/projects/mingw-w64/ On the first screen of the install prompt make sure you set the Architecture to x86_64 and the Threads to win32 I installed to C:\mingw64 (to avoid spaces in the file path) so I added this to my PATH environment variable: C:\mingw64\mingw64\bin I also noticed that the make utility that is included in bin\mingw64 is called mingw32-make so to simplify things I just renamed this to make Open a Windows command prompt and type gcc. You should see something like "fatal error: no input file" Next type make. You should see something like "No targets specified and no makefile found" Type git. If you don't have git, install it and add it to your PATH. These should be all the tools you need to build the xgboost project. To get the source code run these lines: cd c:\ git clone --recursive https://github.com/dmlc/xgboost cd xgboost git submodule init git submodule update cp make/mingw64.mk config.mk make -j4 Note that I ran this part from a Cygwin shell. If you are using the Windows command prompt you should be able to change cp to copy and arrive at the same result. However, if the build fails on you for any reason I would recommend trying again using cygwin. If the build finishes successfully, you should have a file called xgboost.exe located in the project root. To install the Python package, do the following: cd python-package python setup.py install Now you should be good to go. Open up Python, and you can import the package with: import xgboost as xgb To test the installation, I went ahead and ran the basic_walkthrough.py file that was included in the demo/guide-python folder of the project and didn't get any errors.
Tensorflow: How to restore a previously saved model (python)
I want to make a prediction with a model already saved by tensorflow, so I need to restore the model first. The code I have fails because I can't call Saver() without the variables of the model, but that is exactly what I want to load! Do I need to create a dummy model with the same variables first and then restore my saved model and replace the dummy one? Maybe I'm approaching this incorrectly? import tensorflow as tf saver = tf.train.Saver() with tf.Session() as sess: ckpt = tf.train.get_checkpoint_state("my_path_to_model") if ckpt and ckpt.model_checkpoint_path: # Restores from checkpoint saver.restore(sess, "my_path_to_model") print "Model loaded" else: print "No checkpoint file found" I already looked into the code of the cifar10 model example, where a saved model is restored, but I can't understand the logic in the code. As you can tell, I'm just starting with this. Any help would be really appreciated
The checkpoints that are saved contain values for the Variables in your model, not the model/graph itself, which means that the graph should be the same when you restore the checkpoint. Here's an example for a linear regression where there's a training loop that saves variable checkpoints and an evaluation section that will restore variables saved in a prior run and compute predictions. Of course, you can also restore variables and continue training if you'd like. x = tf.placeholder(tf.float32) y = tf.placeholder(tf.float32) w = tf.Variable(tf.zeros([1, 1], dtype=tf.float32)) b = tf.Variable(tf.ones([1, 1], dtype=tf.float32)) y_hat = tf.add(b, tf.matmul(x, w)) ...more setup for optimization and what not... saver = tf.train.Saver() # defaults to saving all variables - in this case w and b with tf.Session() as sess: sess.run(tf.initialize_all_variables()) if FLAGS.train: for i in xrange(FLAGS.training_steps): ...training loop... if (i + 1) % FLAGS.checkpoint_steps == 0: saver.save(sess, FLAGS.checkpoint_dir + 'model.ckpt', global_step=i+1) else: # Here's where you're restoring the variables w and b. # Note that the graph is exactly as it was when the variables were # saved in a prior training run. ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir) if ckpt and ckpt.model_checkpoint_path: saver.restore(sess, ckpt.model_checkpoint_path) else: ...no checkpoint found... # Now you can run the model to get predictions batch_x = ...load some data... predictions = sess.run(y_hat, feed_dict={x: batch_x}) Here are the docs for Variables, which cover saving and restoring. And here are the docs for the Saver.
How can I visualize the weights (variables) in a CNN in TensorFlow?
After training the CNN model, I want to visualize or print out the weights. What can I do? I cannot even print out the variables after training. Thank you!
To visualize the weights, you can use a tf.image_summary() op to transform a convolutional filter (or a slice of a filter) into a summary proto, write them to a log using a tf.train.SummaryWriter, and visualize the log using TensorBoard. Let's say you have the following (simplified) program: filter = tf.Variable(tf.truncated_normal([8, 8, 3])) images = tf.placeholder(tf.float32, shape=[None, 28, 28]) conv = tf.nn.conv2d(images, filter, strides=[1, 1, 1, 1], padding="SAME") # More ops... loss = ... optimizer = tf.GradientDescentOptimizer(0.01) train_op = optimizer.minimize(loss) filter_summary = tf.image_summary(filter) sess = tf.Session() summary_writer = tf.train.SummaryWriter('/tmp/logs', sess.graph_def) for i in range(10000): sess.run(train_op) if i % 10 == 0: # Log a summary every 10 steps. summary_writer.add_summary(filter_summary, i) After doing this, you can start TensorBoard to visualize the logs in /tmp/logs, and you will be able to see a visualization of the filter. Note that this trick visualizes depth-3 filters as RGB images (to match the channels of the input image). If you have deeper filters, or they don't make sense to interpret as color channels, you can use the tf.split() op to split the filter on the depth dimension, and generate one image summary per depth.
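As a rough, untested sketch of the tf.split() idea (using the 0.x-era API names from the answer, and assuming a filter tensor deep_filter of shape [height, width, depth] with a Python integer depth):

slices = tf.split(2, depth, deep_filter)   # old-style signature: tf.split(split_dim, num_split, value)
for i, s in enumerate(slices):
    # image summaries expect a 4-D [batch, height, width, channels] tensor,
    # so add a batch dimension; each slice becomes a grayscale image
    tf.image_summary('filter_slice_%d' % i, tf.expand_dims(s, 0))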
How is `min` of two integers just as fast as 'bit hacking'?
I was watching a lecture series on 'Bit Hacking' and came across the following optimization for finding the minimum of two integers: return x ^ ((y ^ x) & -(x > y)) Which said to be faster than: if x < y: return x else: return y Since the min function can handle more than just two integers (floats, strings, lists, and even custom objects) I assumed that calling min(x, y) would take longer than the optimized bit hack above. To my surprise, they were nearly identical: >>> python -m timeit "min(4, 5)" 1000000 loops, best of 3: 0.203 usec per loop >>> python -m timeit "4 ^ ((5 ^ 4) & -(4 > 5))" 10000000 loops, best of 3: 0.19 usec per loop This is true even for numbers greater than 255 (pre allocated python integer objects) >>> python -m timeit "min(15456, 54657)" 10000000 loops, best of 3: 0.191 usec per loop python -m timeit "15456 ^ ((54657 ^ 15456) & -(54657 > 15456))" 10000000 loops, best of 3: 0.18 usec per loop How is it that a function so versatile like min can still be so fast and optimized? Note: I ran the above code using Python 3.5. I'm assuming that this is the same for Python 2.7+ but haven't tested I've created the following c module: #include <Python.h> static PyObject * my_min(PyObject *self, PyObject *args){ const long x; const long y; if (!PyArg_ParseTuple(args, "ll", &x, &y)) return NULL; return PyLong_FromLong(x ^ ((y ^ x) & -(x > y))); } static PyMethodDef MyMinMethods[] = { { "my_min", my_min, METH_VARARGS, "bit hack min" }, {NULL, NULL, 0, NULL} }; PyMODINIT_FUNC initmymin(void) { PyObject *m; m = Py_InitModule("mymin", MyMinMethods); if (m == NULL) return; } Compiled it, and installed it onto my system (an ubuntu VM machine). I then ran the following: >>> python -m timeit 'min(4, 5)' 10000000 loops, best of 3: 0.11 usec per loop >>> python -m timeit -s 'import mymin' 'mymin.my_min(4,5)' 10000000 loops, best of 3: 0.129 usec per loop While I understand that this is a VM machine, shouldn't there still be a greater gap in execution time with the 'bit hacking' being offloaded into native c?
This is likely due to how the min function is implemented in Python. Many Python builtins are actually implemented in low-level languages such as C or assembly and use the Python APIs in order to be callable in Python. Your bit-fiddling technique is likely very fast in C, but in Python the interpretation overhead of the statement will far exceed the overhead of calling even a complex function implemented in a low-level language. If you really want a fair test, compare a C program or a C Python extension implementing that technique to your Python call of min and see how it compares; I expect that will explain the result you see. EDIT: Thanks to @Two-BitAlchemist I can now give some more detail on additional reasons this bit twiddling will not work well in Python. It appears that integers are not stored in the obvious way but are actually a fairly complex expanding object designed to store potentially very large numbers. Some details on this can be found here (thanks to Two-BitAlchemist), though it appears this has changed somewhat in newer Python versions. Still, the point remains that we are most certainly not manipulating a simple set of bits when we touch an integer in Python, but a complex object where the bit manipulations are in fact virtual method calls with enormous overhead (compared to what they do).
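One way to see that interpretation overhead for yourself is to compare the bytecode of the two expressions; the bit hack compiles to roughly a dozen separate bytecode operations, each dispatched through the interpreter loop, while min(x, y) is a couple of name loads plus a single call into C:

import dis

dis.dis(compile("x ^ ((y ^ x) & -(x > y))", "<expr>", "eval"))
dis.dis(compile("min(x, y)", "<expr>", "eval"))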
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip
I'm getting an error Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? when trying to install lxml through pip. c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* error: command 'C:\\Users\\f\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2 I don't find any libxml2 dev packages to install via pip. Using Python 2.7.10 on x86 in a virtualenv under Windows 10.
I had this issue and realised that whilst I did have libxml2 installed, I didn't have the necessary development libraries required by the python package. Installing them solved the problem: sudo apt-get install libxml2-dev libxslt1-dev sudo pip install lxml
TensorFlow Error found in Tutorial
Dare I even ask? This is such a new technology at this point that I can't find a way to solve this seemingly simple error. The tutorial I'm going over can be found here- http://www.tensorflow.org/tutorials/mnist/pros/index.html#deep-mnist-for-experts I literally copied and pasted all of the code into IPython Notebook and at the very last chunk of code I get an error. # To train and evaluate it we will use code that is nearly identical to that for the simple one layer SoftMax network above. # The differences are that: we will replace the steepest gradient descent optimizer with the more sophisticated ADAM optimizer. cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv)) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) sess.run(tf.initialize_all_variables()) for i in range(20000): batch = mnist.train.next_batch(50) if i%100 == 0: train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0}) print "step %d, training accuracy %g"%(i, train_accuracy) train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) print "test accuracy %g"%accuracy.eval(feed_dict={ x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}) After running this code, I receive this error. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-46-a5d1ab5c0ca8> in <module>() 15 16 print "test accuracy %g"%accuracy.eval(feed_dict={ ---> 17 x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}) /root/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in eval(self, feed_dict, session) 403 404 """ --> 405 return _eval_using_default_session(self, feed_dict, self.graph, session) 406 407 /root/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in _eval_using_default_session(tensors, feed_dict, graph, session) 2712 session = get_default_session() 2713 if session is None: -> 2714 raise ValueError("Cannot evaluate tensor using eval(): No default " 2715 "session is registered. Use 'with " 2716 "DefaultSession(sess)' or pass an explicit session to " ValueError: Cannot evaluate tensor using eval(): No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess) I thought that I may need to install or reinstall TensorFlow via conda install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl but conda doesn't even know how to install it. Does anyone have any idea of how to work around this error?
I figured it out. As you see in the value error, it says No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess) so the answer I came up with is to pass an explicit session to eval, just like it says. Here is where I made the changes. if i%100 == 0: train_accuracy = accuracy.eval(session=sess, feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0}) And train_step.run(session=sess, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) Now the code is working fine.
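An alternative that avoids passing session= to every call is to make the session the default for a block (assuming the same sess as above), e.g.:

with sess.as_default():
    train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

since eval() and run() pick up the default session registered by the with block.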
Tensorflow: Using Adam optimizer
I am experimenting with some simple models in tensorflow, including one that looks very similar to the first MNIST for ML Beginners example, but with a somewhat larger dimensionality. I am able to use the gradient descent optimizer with no problems, getting good enough convergence. When I try to use the ADAM optimizer, I get errors like this: tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value Variable_21/Adam [[Node: Adam_2/update_Variable_21/ApplyAdam = ApplyAdam[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_21, Variable_21/Adam, Variable_21/Adam_1, beta1_power_2, beta2_power_2, Adam_2/learning_rate, Adam_2/beta1, Adam_2/beta2, Adam_2/epsilon, gradients_11/add_10_grad/tuple/control_dependency_1)]] where the specific variable that complains about being uninitialized changes depending on the run. What does this error mean? And what does it suggest is wrong? It seems to occur regardless of the learning rate I use.
The AdamOptimizer class creates additional variables, called "slots", to hold values for the "m" and "v" accumulators. See the source here if you're curious, it's actually quite readable: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/adam.py#L39 . Other optimizers, such as Momentum and Adagrad, use slots too. These variables must be initialized before you can train a model. The normal way to initialize variables is to call tf.initialize_all_variables() which adds ops to initialize the variables present in the graph when it is called. (Aside: despite what its name suggests, initialize_all_variables() does not initialize anything; it only adds ops that will initialize the variables when run.) What you must do is call initialize_all_variables() after you have added the optimizer: ...build your model... # Add the optimizer train_op = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) # Add the ops to initialize variables. These will include # the optimizer slots added by AdamOptimizer(). init_op = tf.initialize_all_variables() # launch the graph in a session sess = tf.Session() # Actually initialize the variables sess.run(init_op) # now train your model for ...: sess.run(train_op)
Why are log2 and log1p so much faster than log and log10?
Whilst playing around with this question I noticed something I couldn't explain regarding the relative performance of np.log2, np.log and np.log10: In [1]: %%timeit x = np.random.rand(100000) ....: np.log2(x) ....: 1000 loops, best of 3: 1.31 ms per loop In [2]: %%timeit x = np.random.rand(100000) np.log(x) ....: 100 loops, best of 3: 3.64 ms per loop In [3]: %%timeit x = np.random.rand(100000) np.log10(x) ....: 100 loops, best of 3: 3.93 ms per loop np.log2 is about 3x faster than np.log and np.log10. Perhaps even more counter-intuitively, np.log1p(x), which computes ln(x + 1), is on par with np.log2: In [4]: %%timeit x = np.random.rand(100000) np.log1p(x) ....: 1000 loops, best of 3: 1.46 ms per loop I obtained almost identical timings in numpy v1.10.1 and v1.8.2. Is there an intuitive explanation for these discrepancies in runtime performance?
This is just a note, but longer than a comment. Apparently this has to do with your particular install: import numpy as np import numexpr as ne x = np.random.rand(100000) I get the same timings with numpy 1.10 from conda and a version compiled with icc: %timeit np.log2(x) 1000 loops, best of 3: 1.24 ms per loop %timeit np.log(x) 1000 loops, best of 3: 1.28 ms per loop I thought it might have something to do with grabbing the MKL VML package, but it looks like that's a no: %timeit ne.evaluate('log(x)') 1000 loops, best of 3: 218 µs per loop Looks like your numpy install is grabbing its log/log2 implementation from two different places, which is odd.
check if a key exists in a bucket in s3 using boto3
I would like to know if a key exists in boto3. I can loop over the bucket contents and check whether the key matches, but that seems longer and like overkill. The official Boto3 docs explicitly state how to do this. Maybe I am missing the obvious. Can anybody point me to how I can achieve this?
Boto 2's boto.s3.key.Key object used to have an exists method that checked if the key existed on S3 by doing a HEAD request and looking at the result, but it seems that it no longer exists. You have to do it yourself: import boto3 import botocore s3 = boto3.resource('s3') exists = False try: s3.Object('my-bucket', 'dootdoot.jpg').load() except botocore.exceptions.ClientError as e: if e.response['Error']['Code'] == "404": exists = False else: raise e else: exists = True print(exists) That should print True if the key exists and False otherwise. load() does a HEAD request for a single key, so it should be relatively quick. Of course, you might be checking if the object exists because you're planning on using it. If that is the case, you can just forget about the load() and do a get() directly, then handle the error case there.
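The same check works with the low-level client instead of the resource; head_object issues the HEAD request directly (a sketch, reusing the bucket and key names from above):

import boto3
import botocore

s3_client = boto3.client('s3')
try:
    s3_client.head_object(Bucket='my-bucket', Key='dootdoot.jpg')
    exists = True
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == '404':
        exists = False
    else:
        raise
print(exists)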
Demystifying sharedctypes performance
In python it is possible to share ctypes objects between multiple processes. However I notice that allocating these objects seems to be extremely expensive. Consider following code: from multiprocessing import sharedctypes as sct import ctypes as ct import numpy as np n = 100000 l = np.random.randint(0, 10, size=n) def foo1(): sh = sct.RawArray(ct.c_int, l) return sh def foo2(): sh = sct.RawArray(ct.c_int, len(l)) sh[:] = l return sh %timeit foo1() %timeit foo2() sh1 = foo1() sh2 = foo2() for i in range(n): assert sh1[i] == sh2[i] The output is: 10 loops, best of 3: 30.4 ms per loop 100 loops, best of 3: 9.65 ms per loop There are two things that puzzle me: Why is explicit allocation and initialization compared to passing a numpy array so much faster? Why is allocating shared memory in python so extremely expensive? %timeit np.arange(n) only takes 46.4 µs. There are several orders of magnitude between those timings.
Sample Code I rewrote your sample code a little bit to look into this issue. Here's where I landed, I'll use it in my answer below: so.py: from multiprocessing import sharedctypes as sct import ctypes as ct import numpy as np n = 100000 l = np.random.randint(0, 10, size=n) def sct_init(): sh = sct.RawArray(ct.c_int, l) return sh def sct_subscript(): sh = sct.RawArray(ct.c_int, n) sh[:] = l return sh def ct_init(): sh = (ct.c_int * n)(*l) return sh def ct_subscript(): sh = (ct.c_int * n)(n) sh[:] = l return sh Note that I added two test cases that do not use shared memory (and use regular a ctypes array instead). timer.py: import traceback from timeit import timeit for t in ["sct_init", "sct_subscript", "ct_init", "ct_subscript"]: print(t) try: print(timeit("{0}()".format(t), setup="from so import {0}".format(t), number=100)) except Exception as e: print("Failed:", e) traceback.print_exc() print print() print ("Test",) from so import * sh1 = sct_init() sh2 = sct_subscript() for i in range(n): assert sh1[i] == sh2[i] print("OK") Test results The results from running the above code using Python 3.6a0 (specifically 3c2fbdb) are: sct_init 2.844902500975877 sct_subscript 0.9383537038229406 ct_init 2.7903486443683505 ct_subscript 0.978101353161037 Test OK What's interesting is that if you change n, the results scale linearly. For example, using n = 100000 (10 times bigger), you get something that's pretty much 10 times slower: sct_init 30.57974253082648 sct_subscript 9.48625904135406 ct_init 30.509132395964116 ct_subscript 9.465419146697968 Test OK Speed difference In the end, the speed difference lies in the hot loop that is called to initialize the array by copying every single value over from the Numpy array (l) to the new array (sh). This makes sense, because as we noted speed scales linearly with array size. When you pass the Numpy array as a constructor argument, the function that does this is Array_init. However, if you assign using sh[:] = l, then it's Array_ass_subscript that does the job. Again, what matters here are the hot loops. Let's look at them. Array_init hot loop (slower): for (i = 0; i < n; ++i) { PyObject *v; v = PyTuple_GET_ITEM(args, i); if (-1 == PySequence_SetItem((PyObject *)self, i, v)) return -1; } Array_ass_subscript hot loop (faster): for (cur = start, i = 0; i < otherlen; cur += step, i++) { PyObject *item = PySequence_GetItem(value, i); int result; if (item == NULL) return -1; result = Array_ass_item(myself, cur, item); Py_DECREF(item); if (result == -1) return -1; } As it turns out, the majority of the speed difference lies in using PySequence_SetItem vs. Array_ass_item. Indeed, if you change the code for Array_init to use Array_ass_item instead of PySequence_SetItem (if (-1 == Array_ass_item((PyObject *)self, i, v))), and recompile Python, the new results become: sct_init 11.504781467840075 sct_subscript 9.381130554247648 ct_init 11.625461496878415 ct_subscript 9.265848568174988 Test OK Still a bit slower, but not by much. In other words, most of the overhead is caused by a slower hot loop, and mostly caused by the code that PySequence_SetItem wraps around Array_ass_item. This code might appear like little overhead at first read, but it really isn't. PySequence_SetItem actually calls into the entire Python machinery to resolve the __setitem__ method and call it. This eventually resolves in a call to Array_ass_item, but only after a large number of levels of indirection (which a direct call to Array_ass_item would bypass entirely!) 
Going down the rabbit hole, the call sequence looks a bit like this: s->ob_type->tp_as_sequence->sq_ass_item points to slot_sq_ass_item. slot_sq_ass_item calls into call_method. call_method calls into PyObject_Call. And on and on until we eventually get to Array_ass_item! In other words, we have C code in Array_init that's calling Python code (__setitem__) in a hot loop. That's slow. Why? Now, why does Python use PySequence_SetItem in Array_init rather than Array_ass_item? Because if it did, it would be bypassing the hooks that are exposed to the developer in Python-land. Indeed, you can intercept calls to sh[:] = ... by subclassing the array and overriding __setitem__ (__setslice__ in Python 2). It will be called once, with a slice argument for the index. Likewise, defining your own __setitem__ also overrides the logic in the constructor. It will be called N times, with an integer argument for the index. This means that if Array_init directly called into Array_ass_item, then you would lose something: __setitem__ would no longer be called in the constructor, and you wouldn't be able to override the behavior anymore. Now, can we retain the faster speed while still exposing the same Python hooks? Well, perhaps, by using this code in Array_init instead of the existing hot loop: return PySequence_SetSlice((PyObject*)self, 0, PyTuple_GET_SIZE(args), args); Using this will call into __setitem__ once with a slice argument (on Python 2, it would call into __setslice__). We still go through the Python hooks, but we only do it once instead of N times. Using this code, the performance becomes: sct_init 12.24651838419959 sct_subscript 10.984305887017399 ct_init 12.138383641839027 ct_subscript 11.79078131634742 Test OK Other overhead I think the rest of the overhead may be due to the tuple instantiation that takes place when calling __init__ on the array object (note the *, and the fact that Array_init expects a tuple for args); this presumably scales with n as well. Indeed, if you replace sh[:] = l with sh[:] = tuple(l) in the test case, then the performance results become almost identical. With n = 1000000: sct_init 11.538272527977824 sct_subscript 10.985187001060694 ct_init 11.485244687646627 ct_subscript 10.843198659364134 Test OK There's probably still something smaller going on, but ultimately we're comparing two substantially different hot loops. There's simply little reason to expect them to have identical performance. I think it might be interesting to try calling Array_ass_subscript from Array_init for the hot loop and see the results, though! Baseline speed Now, to your second question, regarding allocating shared memory. Note that there isn't really an extra cost to allocating shared memory: as the results above show, there isn't a substantial difference between using shared memory or a regular ctypes array. Looking at the Numpy code that implements np.arange, we can finally understand why it's so much faster than sct.RawArray: np.arange doesn't appear to make calls to Python "user-land" (i.e. no call to PySequence_GetItem or PySequence_SetItem). That doesn't necessarily explain all the difference, but you'd probably want to start investigating there.
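One practical note: if the goal is simply to get the Numpy data into shared memory quickly, you can sidestep the per-element Python loop entirely by viewing the RawArray through Numpy and letting Numpy do the copy in C. A minimal sketch (it assumes c_int is 32 bits on your platform, which is the usual case):

from multiprocessing import sharedctypes as sct
import ctypes as ct
import numpy as np

n = 100000
l = np.random.randint(0, 10, size=n)

def numpy_copy():
    sh = sct.RawArray(ct.c_int, n)             # zero-initialized shared memory
    np.frombuffer(sh, dtype=np.int32)[:] = l   # bulk copy, no per-element Python calls
    return sh

sh = numpy_copy()
assert list(sh[:5]) == list(l[:5])

On this path the time is dominated by the memory copy itself rather than by PySequence_SetItem.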
Generate random number outside of range in python
I'm currently working on a pygame game and I need to place objects randomly on the screen, except they cannot be within a designated rectangle. Is there an easy way to do this rather than continuously generating a random pair of coordinates until it's outside of the rectangle? Here's a rough example of what the screen and the rectangle look like. ______________ | __ | | |__| | | | | | |______________| Where the screen size is 1000x800 and the rectangle is [x: 500, y: 250, width: 100, height: 75] A more code oriented way of looking at it would be x = random_int 0 <= x <= 1000 and 500 > x or 600 < x y = random_int 0 <= y <= 800 and 250 > y or 325 < y
Partition the box into a set of sub-boxes that together cover everything outside the forbidden rectangle. Among these valid sub-boxes, choose which one to place your point in, with probability proportional to their areas. Then pick a point uniformly at random from within the chosen sub-box. This will generate samples from the uniform probability distribution on the valid region, based on the chain rule of conditional probability. A sketch of this idea is below.
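A minimal sketch of that recipe in Python, using the screen and rectangle sizes from the question (the partition into four surrounding bands is just one convenient choice):

import random

SCREEN_W, SCREEN_H = 1000, 800
RX, RY, RW, RH = 500, 250, 100, 75   # forbidden rectangle

# Four sub-boxes that together cover everything outside the rectangle:
# left band, right band, strip above it, strip below it.
sub_boxes = [
    (0, 0, RX, SCREEN_H),
    (RX + RW, 0, SCREEN_W - (RX + RW), SCREEN_H),
    (RX, 0, RW, RY),
    (RX, RY + RH, RW, SCREEN_H - (RY + RH)),
]

def random_point_outside():
    # pick a sub-box with probability proportional to its area
    weights = [w * h for (_, _, w, h) in sub_boxes]
    r = random.uniform(0, sum(weights))
    for (x0, y0, w, h), wt in zip(sub_boxes, weights):
        r -= wt
        if r <= 0:
            # then pick a point uniformly inside the chosen sub-box
            return x0 + random.uniform(0, w), y0 + random.uniform(0, h)
    # floating-point fallback: use the last sub-box
    x0, y0, w, h = sub_boxes[-1]
    return x0 + random.uniform(0, w), y0 + random.uniform(0, h)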
Correct way of "Absolute Import" in Python 2.7
Python 2.7.10 In virtualenv Enable from __future__ import absolute_import in each module The directory tree looks like: Project/ prjt/ __init__.py pkg1/ __init__.py module1.py tests/ __init__.py test_module1.py pkg2/ __init__.py module2.py tests/ __init__.py test_module2.py pkg3/ __init__.py module3.py tests/ __init__.py test_module3.py data/ log/ I tried to use the function compute() of pkg2/module2.py in pkg1/module1.py by writing like: # In module1.py import sys sys.path.append('/path/to/Project/prjt') from prjt.pkg2.module2 import compute But when I ran python module1.py, the interpreter raised an ImportError that No module named prjt.pkg2.module2. What is the correct way of "absolute import"? Do I have to add the path to Project to sys.path? How could I run test_module1.py in the interactive interpreter? By python prjt/pkg1/tests/test_module1.py or python -m prjt/pkg1/tests/test_module1.py?
How Python finds modules Python looks for modules in sys.path, and the first entry, sys.path[0], is '', which means Python will look for modules in the current working directory: import sys print sys.path Third-party modules are found in site-packages. So for your absolute import to work, you can append the folder that contains your package to sys.path: import sys sys.path.append('the_folder_of_your_package') import module_you_created module_you_created.fun() export PYTHONPATH The directories listed in PYTHONPATH are added to sys.path before execution: export PYTHONPATH=the_folder_of_your_package import sys [p for p in sys.path if 'the_folder_of_your_package' in p] How could I run test_module1.py in the interactive interpreter? By python Project/pkg1/tests/test_module1.py or python -m Project/pkg1/tests/test_module1.py? You can use the if __name__ == '__main__': idiom and run python Project/pkg1/tests/test_module1.py: if __name__ == '__main__': main()
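Another option, assuming the Project/ layout shown in the question, is to run everything from the Project directory with the -m switch so that Project itself is on sys.path and no sys.path.append call is needed; a rough sketch:

cd /path/to/Project
python -m prjt.pkg1.tests.test_module1    # dotted module path, no .py and no slashes

# pkg1/module1.py can then keep the plain absolute import untouched:
#   from prjt.pkg2.module2 import compute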
Precedence of "in" in Python
This is a bit of a (very basic) language-lawyer kind of question. I understand what the code does, and why, so please no elementary explanations. In an expression, in has higher precedence than and. So if I write if n in "seq1" and "something": ... it is interpreted just like if (n in "seq1") and "something": ... However, the in of a for loop has lower precedence than and (in fact it has to, otherwise the following would be a syntax error). Hence if a Python beginner writes for n in "seq1" and "something": ... ..., it is equivalent to this: for n in ("seq1" and "something"): ... (which, provided "seq1" is truthy, evaluates to for n in "something"). So, the question: Where is the precedence of the for-loop's in keyword specified/documented? I understand that n in ... is not an expression in this context (it does not have a value), but is part of the for statement's syntax. Still, I'm not sure how/where non-expression precedence is specified.
In the context of a for statement, the in is just part of the grammar that makes up that compound statement, and so it is distinct from the operator in. The Python grammar specification defines a for statement like this: for_stmt ::= "for" target_list "in" expression_list ":" suite ["else" ":" suite] The point to make is that this particular in will not be interpreted as part of target_list, because a comparison operation (e.g. x in [x]) is not a valid target. Referring to the grammar specification again, target_list and target are defined as follows: target_list ::= target ("," target)* [","] target ::= identifier | "(" target_list ")" | "[" target_list "]" | attributeref | subscription | slicing | "*" target So the grammar ensures that the parser sees the first in token after a target_list as part of the for ... in ... statement, and not as a binary operator. This is why trying to write something strange like for (x in [x]) in range(5): raises a syntax error: Python's grammar does not permit comparisons like (x in [x]) to be targets. Therefore a statement such as for n in "seq1" and "something" is unambiguous: the target_list part is the identifier n, and the expression_list part is the iterable that "seq1" and "something" evaluates to. As the linked documentation goes on to say, each item from the iterable is assigned to target_list in turn.
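A quick illustration of the difference, using the strings from the question:

# Statement form: the grammar's "in" -- the loop iterates over ("seq1" and "something"),
# i.e. over "something", because "seq1" is truthy.
for n in "seq1" and "something":
    print(n, end=' ')        # s o m e t h i n g

print()

# Expression form: the operator "in" binds tighter than "and".
print("e" in "seq1" and "something")   # ("e" in "seq1") and "something" -> 'something'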
Count number of non-NaN entries in each column of Spark dataframe with Pyspark
I have a very large dataset that is loaded in Hive. It consists of about 1.9 million rows and 1450 columns. I need to determine the "coverage" of each of the columns, meaning, the fraction of rows that have non-NaN values for each column. Here is my code: from pyspark import SparkContext from pyspark.sql import HiveContext import string as string sc = SparkContext(appName="compute_coverages") ## Create the context sqlContext = HiveContext(sc) df = sqlContext.sql("select * from data_table") nrows_tot = df.count() covgs=sc.parallelize(df.columns) .map(lambda x: str(x)) .map(lambda x: (x, float(df.select(x).dropna().count()) / float(nrows_tot) * 100.)) Trying this out in the pyspark shell, if I then do covgs.take(10), it returns a rather large error stack. It says that there's a problem in save in the file /usr/lib64/python2.6/pickle.py. This is the final part of the error: py4j.protocol.Py4JError: An error occurred while calling o37.__getnewargs__. Trace: py4j.Py4JException: Method __getnewargs__([]) does not exist at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:333) at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:342) at py4j.Gateway.invoke(Gateway.java:252) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:207) at java.lang.Thread.run(Thread.java:745) If there is a better way to accomplish this than the way I'm trying, I'm open to suggestions. I can't use pandas, though, as it's not currently available on the cluster I work on and I don't have rights to install it.
Let's start with a dummy data: from pyspark.sql import Row row = Row("x", "y", "z") df = sc.parallelize([ row(0, 1, 2), row(None, 3, 4), row(None, None, 5)]).toDF() ## +----+----+---+ ## | x| y| z| ## +----+----+---+ ## | 0| 1| 2| ## |null| 3| 4| ## |null|null| 5| ## +----+----+---+ All you need is a simple aggregation: from pyspark.sql.functions import col, count, sum def count_not_null(c): """Use conversion between boolean and integer - False -> 0 - True -> 1 """ return sum(col(c).isNotNull().cast("integer")).alias(c) exprs = [count_not_null(c) for c in df.columns] df.agg(*exprs).show() ## +---+---+---+ ## | x| y| z| ## +---+---+---+ ## | 1| 2| 3| ## +---+---+---+ You can also leverage SQL NULL semantics to achieve the same result without creating a custom function: df.agg(*[ count(c).alias(c) # vertical (column-wise) operations in SQL ignore NULLs for c in df.columns ]).show() ## +---+---+---+ ## | x| y| z| ## +---+---+---+ ## | 1| 2| 3| ## +---+---+---+ If you prefer fractions: exprs = [(count_not_null(c) / count("*")).alias(c) for c in df.columns] df.agg(*exprs).show() ## +------------------+------------------+---+ ## | x| y| z| ## +------------------+------------------+---+ ## |0.3333333333333333|0.6666666666666666|1.0| ## +------------------+------------------+---+ or # COUNT(*) is equivalent to COUNT(1) so NULLs won't be an issue df.select(*[(count(c) / count("*")).alias(c) for c in df.columns]).show() ## +------------------+------------------+---+ ## | x| y| z| ## +------------------+------------------+---+ ## |0.3333333333333333|0.6666666666666666|1.0| ## +------------------+------------------+---+
Check constraint for mutually exclusive columns in SQLAlchemy
If I have a SQLAlchemy declarative model like below: class Test(Model): __tablename__ = 'tests' id = Column(Integer, Sequence('test_id_seq'), primary_key=True) ... Atest_id = Column(Integer, ForeignKey('Atests.id'), nullable=True) Btest_id = Column(Integer, ForeignKey('Btests.id'), nullable=True) Ctest_id = Column(Integer, ForeignKey('Ctests.id'), nullable=True) Dtest_id = Column(Integer, ForeignKey('Dtests.id'), nullable=True) Etest_id = Column(Integer, ForeignKey('Etests.id'), nullable=True) ... date = Column(DateTime) status = Column(String(20)) # pass, fail, needs_review And I would like to ensure that only one of the *test_id foreign keys is present in a given row, how might I accomplish that in SQLAlchemy? I see that there is an SQLAlchemy CheckConstraint object (see docs), but MySQL does not support check constraints. The data model has interaction outside of SQLAlchemy, so preferably it would be a database-level check (MySQL)
Well, considering your requirements "The data model has interaction outside of SQLAlchemy, so preferably it would be a database-level check (MySQL)" and 'ensure that only one [..] is not null', I think the best approach is to write a trigger like this: DELIMITER $$ CREATE TRIGGER check_null_insert BEFORE INSERT ON my_table FOR EACH ROW BEGIN IF CHAR_LENGTH(CONCAT_WS('', NEW.a-NEW.a, NEW.b-NEW.b, NEW.c-NEW.c)) <> 1 THEN UPDATE `Error: Only one value of *test_id must be not null` SET z=0; END IF; END$$ DELIMITER ; Some tricks and considerations: IF STATEMENT: In order to avoid tediously checking that each column is not null while the others are null, I used this trick: reduce each column to one character and count how many characters remain. Note that NEW.a-NEW.a always yields 1 character if NEW.a is an integer, while a NULL column yields 0 characters because NULL-NULL returns NULL on MySQL and CONCAT_WS skips NULLs. So the IF condition fires (and raises the error) whenever the number of non-NULL columns is not exactly one. ERROR TRIGGERING: I suppose you want to raise an error, so how do you do that on MySQL? You didn't mention the MySQL version. Only on MySQL 5.5 and later can you use the SIGNAL syntax to throw an exception. So the more portable way is issuing an invalid statement like: UPDATE xx SET z=0. If you are using MySQL 5.5 you could use: signal sqlstate '45000' set message_text = 'Error: Only one value of *test_id must be not null'; instead of UPDATE `Error: Only one value of *test_id must be not null` SET z=0; Also, I think you want to check this on updates too, so use: DELIMITER $$ CREATE TRIGGER check_null_update BEFORE UPDATE ON my_table FOR EACH ROW BEGIN IF CHAR_LENGTH(CONCAT_WS('', NEW.a-NEW.a, NEW.b-NEW.b, NEW.c-NEW.c)) <> 1 THEN UPDATE `Error: Only one value of *test_id must be not null` SET z=0; END IF; END$$ DELIMITER ; Or create a stored procedure and call it. Update For databases that support check constraints, the code is simpler; see this example for SQL Server: CREATE TABLE MyTable (col1 INT NULL, col2 INT NULL, col3 INT NULL); GO ALTER TABLE MyTable ADD CONSTRAINT CheckOnlyOneColumnIsNull CHECK ( LEN(CONCAT(col1-col1, col2-col2, col3-col3)) = 1 ) GO
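If you would rather keep that trigger next to the SQLAlchemy model so it is emitted right after the table is created, one possible sketch (assuming the Test model and tests table from the question, MySQL 5.5+ for SIGNAL, and a made-up trigger name) is:

from sqlalchemy import DDL, event

one_test_id_trigger = DDL("""
CREATE TRIGGER tests_one_test_id BEFORE INSERT ON tests
FOR EACH ROW
BEGIN
    IF (NEW.Atest_id IS NOT NULL) + (NEW.Btest_id IS NOT NULL)
       + (NEW.Ctest_id IS NOT NULL) + (NEW.Dtest_id IS NOT NULL)
       + (NEW.Etest_id IS NOT NULL) <> 1 THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Exactly one *test_id must be set';
    END IF;
END
""")

# Emit the trigger only on MySQL, right after CREATE TABLE tests.
event.listen(Test.__table__, 'after_create',
             one_test_id_trigger.execute_if(dialect='mysql'))

Here the boolean expressions are summed (MySQL treats them as 0/1), which is just another way of counting the non-NULL columns.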
How to set adaptive learning rate for GradientDescentOptimizer?
I am using TensorFlow to train a neural network. This is how I am initializing the GradientDescentOptimizer: init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) mse = tf.reduce_mean(tf.square(out - out_)) train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse) The thing here is that I don't know how to set an update rule for the learning rate or a decay value for that. How can I use an adaptive learning rate here?
First of all, tf.train.GradientDescentOptimizer is designed to use a constant learning rate for all variables in all steps. TensorFlow also provides out-of-the-box adaptive optimizers including the tf.train.AdagradOptimizer and the tf.train.AdamOptimizer, and these can be used as drop-in replacements. However, if you want to control the learning rate with otherwise-vanilla gradient descent, you can take advantage of the fact that the learning_rate argument to the tf.train.GradientDescentOptimizer constructor can be a Tensor object. This allows you to compute a different value for the learning rate in each step, for example: learning_rate = tf.placeholder(tf.float32, shape=[]) # ... train_step = tf.train.GradientDescentOptimizer( learning_rate=learning_rate).minimize(mse) sess = tf.Session() # Feed different values for learning rate to each training step. sess.run(train_step, feed_dict={learning_rate: 0.1}) sess.run(train_step, feed_dict={learning_rate: 0.1}) sess.run(train_step, feed_dict={learning_rate: 0.01}) sess.run(train_step, feed_dict={learning_rate: 0.01}) Alternatively, you could create a scalar tf.Variable that holds the learning rate, and assign it each time you want to change the learning rate.
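If you want a decay schedule rather than feeding values by hand, the same trick works with tf.train.exponential_decay, which returns a learning-rate tensor computed from a global step. A rough sketch, reusing the mse loss from your snippet (the decay numbers are made up and need tuning):

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    0.3,           # initial learning rate
    global_step,
    100000,        # decay period in steps
    0.96,          # decay rate
    staircase=True)

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    mse, global_step=global_step)   # global_step is incremented on every update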
What's the purpose of tf.app.flags in TensorFlow?
I am reading some example codes in Tensorflow, I found following code flags = tf.app.flags FLAGS = flags.FLAGS flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.') flags.DEFINE_integer('max_steps', 2000, 'Number of steps to run trainer.') flags.DEFINE_integer('hidden1', 128, 'Number of units in hidden layer 1.') flags.DEFINE_integer('hidden2', 32, 'Number of units in hidden layer 2.') flags.DEFINE_integer('batch_size', 100, 'Batch size. ' 'Must divide evenly into the dataset sizes.') flags.DEFINE_string('train_dir', 'data', 'Directory to put the training data.') flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data ' 'for unit testing.') in tensorflow/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py But I can't find any docs about this usage of tf.app.flags. And I found the implementation of this flags is in the tensorflow/tensorflow/python/platform/default/_flags.py Obviously, this tf.app.flags is somehow used to configure a network, so why is it not in the API docs? Can anyone explain what is going on here?
The tf.app.flags module is presently a thin wrapper around argparse, which implements a subset of the functionality in python-gflags. Note that this module is currently packaged as a convenience for writing demo apps, and is not technically part of the public API, so it may change in future. We recommend that you implement your own flag parsing using argparse or whatever library you prefer. EDIT: The tf.app.flags module is not in fact implemented using python-gflags, but it uses a similar API.
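For reference, a rough argparse equivalent of the flags defined in the question might look like this (same names and defaults as that snippet):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--learning_rate', type=float, default=0.01,
                    help='Initial learning rate.')
parser.add_argument('--max_steps', type=int, default=2000,
                    help='Number of steps to run trainer.')
parser.add_argument('--hidden1', type=int, default=128,
                    help='Number of units in hidden layer 1.')
parser.add_argument('--hidden2', type=int, default=32,
                    help='Number of units in hidden layer 2.')
parser.add_argument('--batch_size', type=int, default=100,
                    help='Batch size. Must divide evenly into the dataset sizes.')
parser.add_argument('--train_dir', type=str, default='data',
                    help='Directory to put the training data.')
parser.add_argument('--fake_data', action='store_true',
                    help='If true, uses fake data for unit testing.')
FLAGS = parser.parse_args()

print(FLAGS.learning_rate, FLAGS.max_steps, FLAGS.train_dir)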
How to specify multiple return types using type-hints
I have a function in python that can either return a bool or a list. Is there a way to specify the return types using type hints. For example, Is this the correct way to do it? def foo(id) -> list or bool: ...
From the documentation class typing.Union Union type; Union[X, Y] means either X or Y. Hence the proper way to represent more than one return data type is def foo(client_id: str) -> Union[list,bool] But do note that typing is not enforced in Python3.5 >>> def foo(a:str) -> list: ... return("Works") ... >>> foo(1) 'Works' As you can see I am passing a int value and returning a str. However the __annotations__ will be set to the respective values. >>> foo.__annotations__ {'return': <class 'list'>, 'a': <class 'str'>} Please Go through PEP 483 for more about Type hints. Also see What are Type hints in Python 3.5? Kindly note that this is available only for Python 3.5 and upwards. This is mentioned clearly in PEP 484.
How could I use Batch Normalization in TensorFlow?
I would like to use Batch Normalization in TensorFlow, since I found it in the source code core/ops/nn_ops.cc. However, I did not find it documented on tensorflow.org. BN has different semantics in MLP and CNN, so I am not sure what exactly this BN does. I did not find a method called MovingMoments either. The C++ code is copied here for reference: REGISTER_OP("BatchNormWithGlobalNormalization") .Input("t: T") .Input("m: T") .Input("v: T") .Input("beta: T") .Input("gamma: T") .Output("result: T") .Attr("T: numbertype") .Attr("variance_epsilon: float") .Attr("scale_after_normalization: bool") .Doc(R"doc( Batch normalization. t: A 4D input Tensor. m: A 1D mean Tensor with size matching the last dimension of t. This is the first output from MovingMoments. v: A 1D variance Tensor with size matching the last dimension of t. This is the second output from MovingMoments. beta: A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor. gamma: A 1D gamma Tensor with size matching the last dimension of t. If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor. variance_epsilon: A small float number to avoid dividing by 0. scale_after_normalization: A bool indicating whether the resulted tensor needs to be multiplied with gamma. )doc");
Update July 2016 The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in either contrib/layers, tflearn, or slim. Previous answer if you want to DIY: The documentation string for this has improved since the release - see the docs comment in the master branch instead of the one you found. It clarifies, in particular, that it's the output from tf.nn.moments. You can see a very simple example of its use in the batch_norm test code. For a more real-world use example, I've included below the helper class and use notes that I scribbled up for my own use (no warranty provided!): """A helper class for managing batch normalization state. This class is designed to simplify adding batch normalization (http://arxiv.org/pdf/1502.03167v3.pdf) to your model by managing the state variables associated with it. Important use note: The function get_assigner() returns an op that must be executed to save the updated state. A suggested way to do this is to make execution of the model optimizer force it, e.g., by: update_assignments = tf.group(bn1.get_assigner(), bn2.get_assigner()) with tf.control_dependencies([optimizer]): optimizer = tf.group(update_assignments) """ import tensorflow as tf class ConvolutionalBatchNormalizer(object): """Helper class that groups the normalization logic and variables. Use: ewma = tf.train.ExponentialMovingAverage(decay=0.99) bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True) update_assignments = bn.get_assigner() x = bn.normalize(y, train=training?) (the output x will be batch-normalized). """ def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm): self.mean = tf.Variable(tf.constant(0.0, shape=[depth]), trainable=False) self.variance = tf.Variable(tf.constant(1.0, shape=[depth]), trainable=False) self.beta = tf.Variable(tf.constant(0.0, shape=[depth])) self.gamma = tf.Variable(tf.constant(1.0, shape=[depth])) self.ewma_trainer = ewma_trainer self.epsilon = epsilon self.scale_after_norm = scale_after_norm def get_assigner(self): """Returns an EWMA apply op that must be invoked after optimization.""" return self.ewma_trainer.apply([self.mean, self.variance]) def normalize(self, x, train=True): """Returns a batch-normalized version of x.""" if train: mean, variance = tf.nn.moments(x, [0, 1, 2]) assign_mean = self.mean.assign(mean) assign_variance = self.variance.assign(variance) with tf.control_dependencies([assign_mean, assign_variance]): return tf.nn.batch_norm_with_global_normalization( x, mean, variance, self.beta, self.gamma, self.epsilon, self.scale_after_norm) else: mean = self.ewma_trainer.average(self.mean) variance = self.ewma_trainer.average(self.variance) local_beta = tf.identity(self.beta) local_gamma = tf.identity(self.gamma) return tf.nn.batch_norm_with_global_normalization( x, mean, variance, local_beta, local_gamma, self.epsilon, self.scale_after_norm) Note that I called it a ConvolutionalBatchNormalizer because it pins the use of tf.nn.moments to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0. Feedback appreciated if you use it.
How to convert column with dtype as object to string in Pandas Dataframe
When I read a csv file to pandas dataframe, each column will be casted to datatypes on it's own. I have a column that was converted to object. I want to perform string operations for that column like splitting the values and creating a list. But no such operation is being performed because of it's dtype being object. Can anyone please let me know the way to convert all the items of a column to strings instead of objects? I tried all the possible ways but nothing worked. I used astype, str(), to_string etc a=lambda x: str(x).split(',') df['column'].apply(a) or df['column].astype(str)
Did you try assigning it back to the column? df['column'] = df['column'].astype('str') Referring to this question, the pandas dataframe stores the pointers to the strings and hence it is of type 'object'. As per the docs, you could try: df['column_new'] = df['column'].str.split(',')
Subclassing matplotlib Text: manipulate properties of child artist
I am working on an implementation of a class for inline labeling of line objects. For this purpose I have made a subclass of the Text class which as a Line2D object as an attribute. The code in my previous post was maybe a bit lengthy, so I have isolated the problem here: from matplotlib.text import Text from matplotlib import pyplot as plt import numpy as np class LineText(Text): def __init__(self,line,*args,**kwargs): x_pos = line.get_xdata().mean() y_pos = line.get_ydata().mean() Text.__init__(self,x=x_pos,y=y_pos,*args,**kwargs) self.line = line def draw(self,renderer): self.line.set_color(self.get_color()) self.line.draw(renderer = renderer) Text.draw(self,renderer) if __name__ == '__main__': x = np.linspace(0,1,20) y = np.linspace(0,1,20) ax = plt.subplot(1,1,1) line = plt.plot(x,y,color = 'r')[0] linetext = LineText(line,text = 'abc') ax.add_artist(linetext) plt.show() The class takes the handle of a Line2D as returned from the plot function and in the .draw method, it makes some changes to the line. For illustration purposes I have here simply tried to change its colour. After changing the colour of the line, I call the lines draw. This does however not have the expected effect. When the figure is first drawn, there seems to be a superposition of a red and a black line. As soon as the figure is resized or otherwise forced to redraw, the line changes its colour as expected. The only way I have found so far to force the figure to be drawn correctly upon opening was to add a plt.draw() before the show(). This does however feel clumsy. Can I somehow force only the line object to be redrawn? Or am I doing it completely wrong? Thanks in advance.
The issue is that you're not updating the line until it is redrawn, I think this should work: class LineText(Text): def __init__(self,line,*args,**kwargs): x_pos = line.get_xdata().mean() y_pos = line.get_ydata().mean() Text.__init__(self,x=x_pos,y=y_pos,*args,**kwargs) self.line = line self.line.set_color(self.get_color()) plt.gca().add_artist(self.line) # You could also pass `ax` instead of calling `plt.gca()` plt.gca().add_artist(self) if __name__ == '__main__': x = np.linspace(0,1,20) y = np.linspace(0,1,20) ax = plt.subplot(1,1,1) line = plt.plot(x,y, 'r--', alpha=0.5)[0] linetext = LineText(line,text = 'abc') # ax.add_artist(linetext) # Artist is being added in `__init__` instead plt.show(block=False)
Can Pickle handle files larger than the RAM installed on my machine?
I'm using pickle for saving on disk my NLP classifier built with the TextBlob library. I'm using pickle after a lot of searches related to this question. At the moment I'm working locally and I have no problem loading the pickle file (which is 1.5Gb) with my i7 and 16gb RAM machine. But the idea is that my program, in the future, has to run on my server which only has 512Mb RAM installed. Can pickle handle such a large file or will I face memory issues? On my server I've got Python 3.5 installed and it is a Linux server (not sure which distribution). I'm asking because at the moment I can't access my server, so I can't just try and find out what happens, but at the same time I'm doubtful if I can keep this approach or I have to find other solutions.
Unfortunately this is difficult to accurately answer without testing it on your machine. Here are some initial thoughts: There is no inherent size limit that the Pickle module enforces, but you're pushing the boundaries of its intended use. It's not designed for individual large objects. However, since you're using Python 3.5, you will be able to take advantage of PEP 3154, which adds better support for large objects. You should specify pickle.HIGHEST_PROTOCOL when you dump your data. You will likely have a large performance hit because you're trying to deal with an object that is 3x the size of your memory. Your system will probably start swapping, and possibly even thrashing. RAM is so cheap these days that bumping it up to at least 2GB should help significantly. To handle the swapping, make sure you have enough swap space available (a large swap partition if you're on Linux, or enough space for the swap file on your primary partition on Windows). As pal sch's comment shows, Pickle is not very friendly to RAM consumption during the pickling process, so you may have to deal with Python trying to get even more memory from the OS than the 1.5GB we may expect for your object. Given these considerations, I don't expect it to work out very well for you. I'd strongly suggest upgrading the RAM on your target machine to make this work.
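For completeness, a minimal sketch of dumping and loading with the highest protocol as suggested above (classifier and the file name are of course placeholders):

import pickle

with open('classifier.pkl', 'wb') as f:
    pickle.dump(classifier, f, protocol=pickle.HIGHEST_PROTOCOL)

with open('classifier.pkl', 'rb') as f:
    classifier = pickle.load(f)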
What's the difference between loop.create_task, asyncio.async/ensure_future and Task?
I'm a little bit confused by some asyncio functions. I see there is BaseEventLoop.create_task(coro) function to schedule a co-routine. The documentation for create_task says its a new function and for compatibility we should use asyncio.async(coro) which by referring to docs again I see is an alias for asyncio.ensure_future(coro) which again schedules the execution of a co-routine. Meanwhile, I've been using Task(coro) for scheduling co-routine execution and that too seems to be working fine. so, what's the difference between all these?
As you've noticed, they all do the same thing. asyncio.async had to be replaced with asyncio.ensure_future because in Python >= 3.5, async has been made a keyword[1]. create_task's raison d'etre[2]: Third-party event loops can use their own subclass of Task for interoperability. In this case, the result type is a subclass of Task. And this also means you should not create a Task directly, because different event loops might have different ways of creating a "Task". Edit Another important difference is that in addition to accepting coroutines, ensure_future also accepts any awaitable object; create_task on the other hand just accepts coroutines.
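A small sketch that shows the behavioural difference with awaitables that are not coroutines (Python 3.5+ syntax):

import asyncio

async def work():
    await asyncio.sleep(0.1)
    return 42

loop = asyncio.get_event_loop()

t1 = loop.create_task(work())        # accepts a coroutine only
t2 = asyncio.ensure_future(work())   # accepts coroutines, Tasks, Futures, any awaitable

fut = asyncio.Future()
fut.set_result('already done')
t3 = asyncio.ensure_future(fut)      # returned unchanged, since it is already a Future
# loop.create_task(fut)              # this would raise TypeError

print(loop.run_until_complete(asyncio.gather(t1, t2, t3)))  # [42, 42, 'already done']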
Hash for lambda function in Python
I'm trying to get the hash of a lambda function. Why do I get two values (8746164008739 and -9223363290690767077)? Why is the hash from the lambda function not always one value? >>> fn = lambda: 1 >>> hash(fn) -9223363290690767077 >>> fn = lambda: 1 >>> hash(fn) 8746164008739 >>> fn = lambda: 1 >>> hash(fn) -9223363290690767077 >>> fn = lambda: 1 >>> hash(fn) 8746164008739 >>> fn = lambda: 1 >>> hash(fn) -9223363290690767077
Two objects are not guaranteed to hash to the same value unless they compare equal [1]. Python functions (including lambdas) don't compare equal even if they have identical code [2]. For example: >>> (lambda: 1) == (lambda: 1) False Implementation-wise, this behaviour is due to the fact that function objects don't provide their own equality operator. Instead, they inherit the default one that uses the object's identity, i.e. its address. From the documentation: If no __cmp__(), __eq__() or __ne__() operation is defined, class instances are compared by object identity (“address”). Here is what happens in your particular example: fn = lambda: 1 # New function is allocated at address A and stored in fn. fn = lambda: 1 # New function is allocated at address B and stored in fn. # The function at address A is garbage collected. fn = lambda: 1 # New function is allocated at address A and stored in fn. # The function at address B is garbage collected. fn = lambda: 1 # New function is allocated at address B and stored in fn. # The function at address A is garbage collected. ... Since address A is always hashed to one value, and address B to another, you are seeing hash(fn) alternate between the two values. This alternating behaviour is, however, an implementation artefact and could change one day if, for example, the garbage collector were made to behave slightly differently. The following insightful note has been contributed by @ruakh: It is worth noting that it's not possible to write a general process for determining if two functions are equivalent. (This is a consequence of the undecidability of the halting problem.) Furthermore, two Python functions can behave differently even if their code is identical (since they may be closures referring to distinct-but-identically-named variables). So it makes sense that Python functions don't overload the equality operator: there's no way to implement anything better than the default object-identity comparison. [1] The converse is generally not true: two objects that compare unequal can have the same hash value. This is called a hash collision. [2] Calling your lambdas and then hashing the result would of course always give the same value since hash(1) is always the same within one program: >>> (lambda: 1)() == (lambda: 1)() True
Is this time complexity actually O(n^2)?
I am working on a problem out of CTCI. The third problem of chapter 1 has you take a string such as 'Mr John Smith ' and asks you to replace the intermediary spaces with %20: 'Mr%20John%20Smith' The author offers this solution in Python, calling it O(n): def urlify(string, length): '''function replaces single spaces with %20 and removes trailing spaces''' counter = 0 output = '' for char in string: counter += 1 if counter > length: return output elif char == ' ': output = output + '%20' elif char != ' ': output = output + char return output My question: I understand that this is O(n) in terms of scanning through the actual string from left to right. But aren't strings in Python immutable? If I have a string and I add another string to it with the + operator, doesn't it allocate the necessary space, copy over the original, and then copy over the appending string? If I have a collection of n strings each of length 1, then that takes: 1 + 2 + 3 + 4 + 5 + ... + n = n(n+1)/2 or O(n^2) time, yes? Or am I mistaken in how Python handles appending? Alternatively, if you'd be willing to teach me how to fish: How would I go about finding this out for myself? I've been unsuccessful in my attempts to Google an official source. I found https://wiki.python.org/moin/TimeComplexity but this doesn't have anything on strings.
In CPython, the standard implementation of Python, there's an implementation detail that makes this usually O(n), implemented in the code the bytecode evaluation loop calls for + or += with two string operands. If Python detects that the left argument has no other references, it calls realloc to attempt to avoid a copy by resizing the string in place. This is not something you should ever rely on, because it's an implementation detail and because if realloc ends up needing to move the string frequently, performance degrades to O(n^2) anyway. Without the weird implementation detail, the algorithm is O(n^2) due to the quadratic amount of copying involved. Code like this would only make sense in a language with mutable strings, like C++, and even in C++ you'd want to use +=.
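For comparison, the idiomatic way to keep this linear without relying on that CPython implementation detail is to collect the pieces in a list and join once at the end; a sketch of the same urlify logic:

def urlify(string, length):
    pieces = []
    for char in string[:length]:
        pieces.append('%20' if char == ' ' else char)
    return ''.join(pieces)   # a single O(n) concatenation

print(urlify('Mr John Smith    ', 13))   # Mr%20John%20Smith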
Is "x < y < z" faster than "x < y and y < z"?
From this page, we know that: Chained comparisons are faster than using the and operator. Write x < y < z instead of x < y and y < z. However, I got a different result testing the following code snippets: $ python -m timeit "x = 1.2" "y = 1.3" "z = 1.8" "x < y < z" 1000000 loops, best of 3: 0.322 usec per loop $ python -m timeit "x = 1.2" "y = 1.3" "z = 1.8" "x < y and y < z" 1000000 loops, best of 3: 0.22 usec per loop $ python -m timeit "x = 1.2" "y = 1.3" "z = 1.1" "x < y < z" 1000000 loops, best of 3: 0.279 usec per loop $ python -m timeit "x = 1.2" "y = 1.3" "z = 1.1" "x < y and y < z" 1000000 loops, best of 3: 0.215 usec per loop It seems that x < y and y < z is faster than x < y < z. Why? After searching some posts in this site (like this one) I know that "evaluated only once" is the key for x < y < z, however I'm still confused. To do further study, I disassembled these two functions using dis.dis: import dis def chained_compare(): x = 1.2 y = 1.3 z = 1.1 x < y < z def and_compare(): x = 1.2 y = 1.3 z = 1.1 x < y and y < z dis.dis(chained_compare) dis.dis(and_compare) And the output is: ## chained_compare ## 4 0 LOAD_CONST 1 (1.2) 3 STORE_FAST 0 (x) 5 6 LOAD_CONST 2 (1.3) 9 STORE_FAST 1 (y) 6 12 LOAD_CONST 3 (1.1) 15 STORE_FAST 2 (z) 7 18 LOAD_FAST 0 (x) 21 LOAD_FAST 1 (y) 24 DUP_TOP 25 ROT_THREE 26 COMPARE_OP 0 (<) 29 JUMP_IF_FALSE_OR_POP 41 32 LOAD_FAST 2 (z) 35 COMPARE_OP 0 (<) 38 JUMP_FORWARD 2 (to 43) >> 41 ROT_TWO 42 POP_TOP >> 43 POP_TOP 44 LOAD_CONST 0 (None) 47 RETURN_VALUE ## and_compare ## 10 0 LOAD_CONST 1 (1.2) 3 STORE_FAST 0 (x) 11 6 LOAD_CONST 2 (1.3) 9 STORE_FAST 1 (y) 12 12 LOAD_CONST 3 (1.1) 15 STORE_FAST 2 (z) 13 18 LOAD_FAST 0 (x) 21 LOAD_FAST 1 (y) 24 COMPARE_OP 0 (<) 27 JUMP_IF_FALSE_OR_POP 39 30 LOAD_FAST 1 (y) 33 LOAD_FAST 2 (z) 36 COMPARE_OP 0 (<) >> 39 POP_TOP 40 LOAD_CONST 0 (None) It seems that the x < y and y < z has less dissembled commands than x < y < z. Should I consider x < y and y < z faster than x < y < z? Tested with Python 2.7.6 on an Intel(R) Xeon(R) CPU E5640 @ 2.67GHz.
The difference is that in x < y < z y is only evaluated once. This does not make a large difference if y is a variable, but it does when it is a function call, which takes some time to compute. from time import sleep def y(): sleep(.2) return 1.3 %timeit 1.2 < y() < 1.8 10 loops, best of 3: 203 ms per loop %timeit 1.2 < y() and y() < 1.8 1 loops, best of 3: 405 ms per loop
SKlearn import MLPClassifier fails
I am trying to use the multilayer perceptron from scikit-learn in python. My problem is that the import is not working. All other modules from scikit-learn are working fine. from sklearn.neural_network import MLPClassifier Import Error: cannot import name MLPClassifier I'm using the Python Environment Python64-bit 3.4 in Visual Studio 2015. I installed sklearn from the console with: conda install scikit-learn I also installed numpy and pandas. After I had the error above I also installed scikit-neuralnetwork with: pip install scikit-neuralnetwork The installed scikit-learn version is 0.17. What have I done wrong? Am I missing an installation? ----- EDIT ---- In addition to the answer of tttthomasssss, I found the solution on how to install the sknn library for neural networks. I followed this tutorial. Do the following steps: pip install scikit-neuralnetwork download and install the GCC compiler install mingw with conda install mingw libpython After that, you can use the sknn library.
MLPClassifier is not yet available in scikit-learn v0.17 (as of 1 Dec 2015). If you really want to use it you could clone 0.18dev (however, I don't know how stable this branch currently is).
How does the min/max function on a nested list work?
Let's say there is a nested list, like: my_list = [[1, 2, 21], [1, 3], [1, 2]] When the function min() is called on this: min(my_list) The output received is [1, 2] Why and how does it work? What are some use cases of it?
How are lists and other sequences compared in Python? Lists (and other sequences) in Python are compared lexicographically and not based on any other parameter. Sequence objects may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted. What is lexicographic sorting? From the Wikipedia page on lexicographic sorting lexicographic or lexicographical order (also known as lexical order, dictionary order, alphabetical order or lexicographic(al) product) is a generalization of the way the alphabetical order of words is based on the alphabetical order of their component letters. The min function returns the smallest value in the iterable. So the lexicographic value of [1,2] is the least in that list. You can check this: >>> my_list=[[1,2,21],[1,3],[1,2]] >>> min(my_list) [1, 2] What is happening in this case of min? Going element-wise on my_list, first compare [1,2,21] and [1,3]. Now from the docs If two items to be compared are themselves sequences of the same type, the lexicographical comparison is carried out recursively. Thus the value of [1,2,21] is less than [1,3], because the second element of [1,3], which is 3, is lexicographically higher than the second element of [1,2,21], which is 2. Now comparing [1,2] and [1,2,21], and adding another reference from the docs If one sequence is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one. [1,2] is an initial sub-sequence of [1,2,21]. Therefore the value of [1,2] on the whole is smaller than that of [1,2,21]. Hence [1,2] is returned as the output. This can be validated by using the sorted function: >>> sorted(my_list) [[1, 2], [1, 2, 21], [1, 3]] What if the list has multiple minimum elements? If the list contains duplicate min elements, the first is returned: >>> my_list=[[1,2],[1,2]] >>> min(my_list) [1, 2] This can be confirmed using the id function call: >>> my_list=[[1,2],[1,2]] >>> [id(i) for i in my_list] [140297364849368, 140297364850160] >>> id(min(my_list)) 140297364849368 What do I need to do to prevent lexicographic comparison in min? If the required comparison is not lexicographic, then the key argument can be used (as mentioned by Padraic). The min function has an additional optional argument called key. The key argument takes a function. The optional key argument specifies a one-argument ordering function like that used for list.sort(). The key argument, if supplied, must be in keyword form (for example, min(a,b,c,key=func)). For example, if we need the smallest element by length, we need to use the len function: >>> my_list=[[1,2,21],[1,3],[1,2]] >>> min(my_list,key=len) # Notice the key argument [1, 3] As we can see, the first shortest element is returned here. What if the list is heterogeneous? Up to Python 2 If the list is heterogeneous, type names are considered for ordering; check Comparisons: Objects of different types except numbers are ordered by their type names Hence if you put an int and a list there, you will get the integer value as the smallest, as 'int' is lexicographically lower than 'list'. Similarly '1' would be of higher value than both of these. >>> my_list=[[1,1,21],1,'1'] >>> min(my_list) 1 Python 3 and onwards However, this confusing technique was removed in Python 3. It now raises a TypeError. 
Read What's new in Python 3.0 The ordering comparison operators (<, <=, >=, >) raise a TypeError exception when the operands don’t have a meaningful natural ordering. Thus, expressions like 1 < '', 0 > None or len <= len are no longer valid, and e.g. None < None raises TypeError instead of returning False. A corollary is that sorting a heterogeneous list no longer makes sense – all the elements must be comparable to each other. >>> my_list=[[1,1,21],1,'1'] >>> min(my_list) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unorderable types: int() < list() But it works for comparable types. For example: >>> my_list=[1,2.0] >>> min(my_list) 1 Here we can see that the list contains float values and int values. But as float and int are comparable types, the min function works in this case.
Number list with no repeats and ordered
This code returns a list [0,0,0] to [9,9,9], which produces no repeats and each element is in order from smallest to largest. def number_list(): b=[] for position1 in range(10): for position2 in range(10): for position3 in range(10): if position1<=position2 and position2<=position3: b.append([position1, position2, position3]) return b Looking for a shorter and better way to write this code without using multiple variables (position1, position2, position3), instead only using one variable i. Here is my attempt at modifying the code, but I'm stuck at implementing the if statements: def number_list(): b=[] for i in range(1000): b.append(map(int, str(i).zfill(3))) return b
On the same note as the other itertools answer, there is another way with combinations_with_replacement: list(itertools.combinations_with_replacement(range(10), 3))
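A quick sanity check that this matches the triple-loop version (just converting the tuples back to lists):

import itertools

def number_list():
    return [list(c) for c in itertools.combinations_with_replacement(range(10), 3)]

print(number_list()[:3])    # [[0, 0, 0], [0, 0, 1], [0, 0, 2]]
print(len(number_list()))   # 220 ordered triples, from [0, 0, 0] to [9, 9, 9]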
how to set different PYTHONPATH variables for python3 and python2 respectively
I want to add a specific library path only to python2. After adding export PYTHONPATH="/path/to/lib/" to my .bashrc, however, executing python3 gets the error: Your PYTHONPATH points to a site-packages dir for Python 2.x but you are running Python 3.x! I think this is because python2 and python3 share the same PYTHONPATH variable. So, can I set different PYTHONPATH variables for python2 and python3 respectively? If not, how can I add a library path exclusively to a particular version of python?
PYTHONPATH is somewhat of a hack as far as package management is concerned. A "pretty" solution would be to package your library and install it. This could sound more tricky than it is, so let me show you how it works. Let us assume your "package" has a single file named wow.py and you keep it in /home/user/mylib/wow.py. Create the file /home/user/mylib/setup.py with the following content: from setuptools import setup setup(name="WowPackage", packages=["."], ) That's it, now you can "properly install" your package into the Python distribution of your choice without the need to bother about PYTHONPATH. As far as "proper installation" is concerned, you have at least three options: "Really proper". Will copy your code to your python site-packages directory: $ python setup.py install "Development". Will only add a link from the python site-packages to /home/user/mylib. This means that changes to code in your directory will have effect. $ python setup.py develop "User". If you do not want to write to the system directories, you can install the package (either "properly" or "in development mode") to /home/user/.local directory, where Python will also find them on its own. For that, just add --user to the command. $ python setup.py install --user $ python setup.py develop --user To remove a package installed in development mode, do $ python setup.py develop -u or $ python setup.py develop -u --user To remove a package installed "properly", do $ pip uninstall WowPackage If your package is more interesting than a single file (e.g. you have subdirectories and such), just list those in the packages parameter of the setup function (you will need to list everything recursively, hence you'll use a helper function for larger libraries). Once you get a hang of it, make sure to read a more detailed manual as well. In the end, go and contribute your package to PyPI -- it is as simple as calling python setup.py sdist register upload (you'll need a PyPI username, though).
How to change dataframe column names in pyspark?
I come from pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful using the simple command: df.columns = new_column_name_list However, the same doesn't work in pyspark dataframes created using sqlContext. The only solution I could figure out to do this easily is the following: df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt") oldSchema = df.schema for i,k in enumerate(oldSchema.fields): k.name = new_column_name_list[i] df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema) This is basically defining the variable twice and inferring the schema first then renaming the column names and then loading the dataframe again with the updated schema. Is there a better and more efficient way to do this like we do in pandas ? My spark version is 1.5.0
There are many ways to do that: Option 1. Using selectExpr. data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)], ["Name", "askdaosdka"]) data.show() data.printSchema() # Output #+-------+----------+ #| Name|askdaosdka| #+-------+----------+ #|Alberto| 2| #| Dakota| 2| #+-------+----------+ #root # |-- Name: string (nullable = true) # |-- askdaosdka: long (nullable = true) df = data.selectExpr("Name as name", "askdaosdka as age") df.show() df.printSchema() # Output #+-------+---+ #| name|age| #+-------+---+ #|Alberto| 2| #| Dakota| 2| #+-------+---+ #root # |-- name: string (nullable = true) # |-- age: long (nullable = true) Option 2. Using withColumnRenamed, notice that this method allows you to "overwrite" the same column. oldColumns = data.schema.names newColumns = ["name", "age"] df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]), xrange(len(oldColumns)), data) df.printSchema() df.show() Option 3. using alias, in Scala you can also use as. from pyspark.sql.functions import * data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age")) data.show() # Output #+-------+---+ #| name|age| #+-------+---+ #|Alberto| 2| #| Dakota| 2| #+-------+---+ Option 4. Using sqlContext.sql, which lets you use SQL queries on DataFrames registered as tables. sqlContext.registerDataFrameAsTable(data, "myTable") df2 = sqlContext.sql("SELECT Name AS name, askdaosdka as age from myTable") df2.show() # Output #+-------+---+ #| name|age| #+-------+---+ #|Alberto| 2| #| Dakota| 2| #+-------+---+
Send email task with correct context
This code is my celery worker script: from app import celery, create_app app = create_app('default') app.app_context().push() When I try to run the worker I will get into this error: File "/home/vagrant/myproject/venv/app/mymail.py", line 29, in send_email_celery msg.html = render_template(template + '.html', **kwargs) File "/home/vagrant/myproject/venv/local/lib/python2.7/site-packages/flask/templating.py", line 126, in render_template ctx.app.update_template_context(context) File "/home/vagrant/myproject/venv/local/lib/python2.7/site-packages/flask/app.py", line 716, in update_template_context context.update(func()) TypeError: 'NoneType' object is not iterable My question is how can I send the email task, when using a worker in celery. mymail.py from flask import current_app, render_template from flask.ext.mail import Message from . import mail, celery @celery.task def send_async_email_celery(msg): mail.send(msg) def send_email_celery(to, subject, template, **kwargs): app = current_app._get_current_object() msg = Message(subject, sender=app.config['MAIL_SENDER'], recipients=[to]) msg.html = render_template(template + '.html', **kwargs) send_async_email_celery.delay(msg) __init__ ... def create_app(config_name): app = Flask(__name__) app.config.from_object(config[config_name]) config[config_name].init_app(app) bootstrap.init_app(app) mail.init_app(app) db.init_app(app) login_manager.init_app(app) celery.conf.update(app.config) redis_store.init_app(app) from .users import main as main_blueprint app.register_blueprint(main_blueprint) return app Apparently there is some conflict between the blueprint and worker. Remove the blueprint is not an option, if possible, due the custom filters that I need to use in email template.
I finally found the reason for the problem after some debugging with this code. I have an app_context_processor that does not always return a result: @mod.app_context_processor def last_reputation_changes(): if current_user: #code return dict(reputation='xxx') When sending the email, the function needs an else case to return something, since current_user (from flask.ext.login import current_user) is not defined in that context. Basically I only need something like this: def last_reputation_changes(): if current_user: #code return dict(reputation='xxx') else: return dict(reputation=None) So the problem is not related to celery, but to the flask-login integration.
Why does heroku local:run wants to use the global python installation instead of the currently activated virtual env?
Using Heroku to deploy our Django application, everything seems to work by the spec, except the heroku local:run command. We oftentimes need to run commands through Django's manage.py file. Running them on the remote, as one-off dynos, works flawlessly. To run them locally, we try: heroku local:run python manage.py the_command Which fails, despite the fact that the current virtual env contains a Django installation, with ImportError: No module named django.core.management  Diagnostic through the python path Then heroku local:run which python returns: /usr/local/bin/python Whereas which python returns: /Users/myusername/MyProject/venv/bin/python #the correct value Is this a bug in Heroku local:run ? Or are we missunderstanding its expected behaviour ? And more importantly: is there a way to have heroku local:run use the currently installed virtual env ?
After contacting Heroku's support, we understood the problem. The support confirmed that heroku local:run should as expected use the currently active virtual env. The problem is a local configuration problem, due to our .bashrc content: heroku local:run sources .bashrc (and in our case, this was prepending $PATH with the path to the global Python installation, making it found before the virtual env's). On the other hand, heroku local does not source this file. To quote the last message from their support: heroku local:run runs the command using bash in interactive mode, which does read your profile, vs heroku local (aliased to heroku local:start) which does not run in interactive mode.
Django: Support for string view arguments to url() is deprecated and will be removed in Django 1.10
New python/Django user (and indeed new to SO): When trying to migrate my Django project, I get an error: RemovedInDjango110Warning: Support for string view arguments to url() is deprecated and will be removed in Django 1.10 (got main.views.home). Pass the callable instead. url(r'^$', 'main.views.home') Apparently the second argument can't be a string anymore. I came to create this code as it is through a tutorial at pluralsight.com that is teaching how to use Django with a previous version (I'm currently working with 1.9). The teacher instructs us to create urlpatterns in urls.py from the views we create in apps. He teaches us to create a urlpattern such as the following: from django.conf.urls import url from django.contrib import admin urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^$', 'main.views.home') ] to reference def home(request): return render(request, "main/home.html", {'message': 'You\'ve met with a terrible fate, haven\'t you?'}) #this message calls HTML, not shown, not important for question in the views.py of an app "main" that I created. If this method is being deprecated, how do I pass the view argument not as a string? If I just remove the quotes, as shown in the documentation (https://docs.djangoproject.com/en/1.9/topics/http/urls/), I get an error: NameError: name 'main' is not defined I tried to "import" views or main using the code presented in this documentation: from . import views or from . import main which gave me: ImportError: cannot import name 'views' and ImportError: cannot import name 'main' I believe I've traced this down to an import error, and am currently researching that.
I have found the answer to my question. It was indeed an import error. For Django 1.10, you now have to import the app's view.py, and then pass the second argument of url() without quotes. Here is my code now in urls.py: from django.conf.urls import url from django.contrib import admin import main.views urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^$', main.views.home) ] I did not change anything in the app or view.py files. Props to @Rik Poggi for illustrating how to import in his answer to this question: Django - Import views from separate apps
Identifier normalization: Why is the micro sign converted into the Greek letter mu?
I just stumbled upon the following odd situation: >>> class Test: µ = 'foo' >>> Test.µ 'foo' >>> getattr(Test, 'µ') Traceback (most recent call last): File "<pyshell#4>", line 1, in <module> getattr(Test, 'µ') AttributeError: type object 'Test' has no attribute 'µ' >>> 'µ'.encode(), dir(Test)[-1].encode() (b'\xc2\xb5', b'\xce\xbc') The character I entered is always the µ sign on the keyboard, but for some reason it gets converted. Why does this happen?
There are two different characters involved here. One is the MICRO SIGN, which is the one on the keyboard, and the other is GREEK SMALL LETTER MU. To understand what’s going on, we should take a look at how Python defines identifiers in the language reference: identifier ::= xid_start xid_continue* id_start ::= <all characters in general categories Lu, Ll, Lt, Lm, Lo, Nl, the underscore, and characters with the Other_ID_Start property> id_continue ::= <all characters in id_start, plus characters in the categories Mn, Mc, Nd, Pc and others with the Other_ID_Continue property> xid_start ::= <all characters in id_start whose NFKC normalization is in "id_start xid_continue*"> xid_continue ::= <all characters in id_continue whose NFKC normalization is in "id_continue*"> Both our characters, MICRO SIGN and GREEK SMALL LETTER MU, are part of the Ll unicode group (lowercase letters), so both of them can be used at any position in an identifier. Now note that the definition of identifier actually refers to xid_start and xid_continue, and those are defined as all characters in the respective non-x definition whose NFKC normalization results in a valid character sequence for an identifier. Python apparently only cares about the normalized form of identifiers. This is confirmed a bit below: All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC. NFKC is a Unicode normalization that decomposes characters into individual parts. The MICRO SIGN decomposes into GREEK SMALL LETTER MU, and that’s exactly what’s going on there. There are a lot other characters that are also affected by this normalization. One other example is OHM SIGN which decomposes into GREEK CAPITAL LETTER OMEGA. Using that as an identifier gives a similar result, here shown using locals: >>> Ω = 'bar' >>> locals()['Ω'] Traceback (most recent call last): File "<pyshell#1>", line 1, in <module> locals()['Ω'] KeyError: 'Ω' >>> [k for k, v in locals().items() if v == 'bar'][0].encode() b'\xce\xa9' >>> 'Ω'.encode() b'\xe2\x84\xa6' So in the end, this is just something that Python does. Unfortunately, there isn’t really a good way to detect this behavior, causing errors such as the one shown. Usually, when the identifier is only referred to as an identifier, i.e. it’s used like a real variable or attribute, then everything will be fine: The normalization runs every time, and the identifier is found. The only problem is with string-based access. Strings are just strings, of course there is no normalization happening (that would be just a bad idea). And the two ways shown here, getattr and locals, both operate on dictionaries. getattr() accesses an object’s attribute via the object’s __dict__, and locals() returns a dictionary. And in dictionaries, keys can be any string, so it’s perfectly fine to have a MICRO SIGN or a OHM SIGN in there. In those cases, you need to remember to perform a normalization yourself. We can utilize unicodedata.normalize for this, which then also allows us to correctly get our value from inside locals() (or using getattr): >>> normalized_ohm = unicodedata.normalize('NFKC', 'Ω') >>> locals()[normalized_ohm] 'bar'
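Applied to the getattr case from the question, the normalization step described above looks roughly like this (a small sketch; it works whether the 'µ' you type is the MICRO SIGN or the already-normalized GREEK SMALL LETTER MU):

import unicodedata

class Test:
    µ = 'foo'   # the identifier is stored under its NFKC-normalized name

key = unicodedata.normalize('NFKC', 'µ')   # MICRO SIGN -> GREEK SMALL LETTER MU
print(getattr(Test, key))                  # 'foo'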
How can I convert a tensor into a numpy array in TensorFlow?
I know how to convert a numpy array into a tensor object with the function tf.convert_to_tensor(img.eval()). My problem is that after I apply some preprocessing to these tensors in terms of brightness, contrast, etc., I would like to view the resulting transformations to evaluate and tweak my parameters. How can I convert a tensor into a numpy array so I can show it as an image with PIL?
To convert back from a tensor to a numpy array you can simply run .eval() on the transformed tensor (this requires an active session, e.g. inside a with tf.Session() block).
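For example, with the graph-based TensorFlow API of that era, a rough sketch might look like this (the brightness adjustment is just a stand-in for whatever preprocessing you apply, and the random image is a placeholder for your own data):

import numpy as np
import tensorflow as tf

img = np.random.rand(64, 64, 3).astype(np.float32)   # placeholder for your image
t = tf.convert_to_tensor(img)
t = tf.image.adjust_brightness(t, delta=0.1)          # example preprocessing step

with tf.Session() as sess:
    arr = t.eval()            # equivalent to sess.run(t); needs an active session
print(type(arr), arr.shape)   # <class 'numpy.ndarray'> (64, 64, 3)

The resulting arr is a plain numpy array, so it can be handed to PIL (e.g. after scaling to 0-255 and casting to uint8).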
How can a Python list be sliced such that a column is moved to being a separate element column?
I have a list of the following form: [[0, 5.1, 3.5, 1.4, 0.2], [0, 4.9, 3.0, 1.4, 0.2], [0, 4.7, 3.2, 1.3, 0.2], [1, 4.6, 3.1, 1.5, 0.2], [1, 5.0, 3.6, 1.4, 0.2], [1, 5.4, 3.9, 1.7, 0.4], [1, 4.6, 3.4, 1.4, 0.3]] I want to slice out the first column and add it as a new element to each row of data (so at each odd position in the list), changing it to the following form: [[5.1, 3.5, 1.4, 0.2], [0], [4.9, 3.0, 1.4, 0.2], [0], [4.7, 3.2, 1.3, 0.2], [0], [4.6, 3.1, 1.5, 0.2], [1], [5.0, 3.6, 1.4, 0.2], [1], [5.4, 3.9, 1.7, 0.4], [1], [4.6, 3.4, 1.4, 0.3], [1],] How could I do this? So far, I have extracted the necessary information in the following ways: targets = [element[0] for element in dataset] features = dataset[1:]
Try indexing and then flatten the resulting list; I used a list comprehension for the flattening. >>>l=[[0, 5.1, 3.5, 1.4, 0.2], [0, 4.9, 3.0, 1.4, 0.2], [0, 4.7, 3.2, 1.3, 0.2], [1, 4.6, 3.1, 1.5, 0.2], [1, 5.0, 3.6, 1.4, 0.2], [1, 5.4, 3.9, 1.7, 0.4], [1, 4.6, 3.4, 1.4, 0.3]] >>>[[i[1:],[i[0]]] for i in l]#get sliced list of lists >>>[[[5.1, 3.5, 1.4, 0.2], [0]], [[4.9, 3.0, 1.4, 0.2], [0]], [[4.7, 3.2, 1.3, 0.2], [0]], [[4.6, 3.1, 1.5, 0.2], [1]], [[5.0, 3.6, 1.4, 0.2], [1]], [[5.4, 3.9, 1.7, 0.4], [1]], [[4.6, 3.4, 1.4, 0.3], [1]]] >>>d=[[i[1:],[i[0]]] for i in l] >>>[item for sublist in d for item in sublist]#flatten list d >>>[[5.1, 3.5, 1.4, 0.2], [0], [4.9, 3.0, 1.4, 0.2], [0], [4.7, 3.2, 1.3, 0.2], [0], [4.6, 3.1, 1.5, 0.2], [1], [5.0, 3.6, 1.4, 0.2], [1], [5.4, 3.9, 1.7, 0.4], [1], [4.6, 3.4, 1.4, 0.3], [1]] A one-liner alternative: [item for sublist in [[i[1:],[i[0]]] for i in l] for item in sublist] #Here l is that list
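If you'd rather avoid the nested comprehension, itertools.chain gives the same flattened result in one pass; a sketch using a shortened sample of the same data:

from itertools import chain

l = [[0, 5.1, 3.5, 1.4, 0.2], [1, 4.6, 3.1, 1.5, 0.2]]
result = list(chain.from_iterable([row[1:], [row[0]]] for row in l))
# [[5.1, 3.5, 1.4, 0.2], [0], [4.6, 3.1, 1.5, 0.2], [1]]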
Why does Python "preemptively" hang when trying to calculate a very large number?
I've asked this question before about killing a process that uses too much memory, and I've got most of a solution worked out. However, there is one problem: calculating massive numbers seems to be untouched by the method I'm trying to use. This code below is intended to put a 10 second CPU time limit on the process. import resource import os import signal def timeRanOut(n, stack): raise SystemExit('ran out of time!') signal.signal(signal.SIGXCPU, timeRanOut) soft,hard = resource.getrlimit(resource.RLIMIT_CPU) print(soft,hard) resource.setrlimit(resource.RLIMIT_CPU, (10, 100)) y = 10**(10**10) What I expect to see when I run this script (on a Unix machine) is this: -1 -1 ran out of time! Instead, I get no output. The only way I get output is with Ctrl + C, and I get this if I Ctrl + C after 10 seconds: ^C-1 -1 ran out of time! CPU time limit exceeded If I Ctrl + C before 10 seconds, then I have to do it twice, and the console output looks like this: ^C-1 -1 ^CTraceback (most recent call last): File "procLimitTest.py", line 18, in <module> y = 10**(10**10) KeyboardInterrupt In the course of experimenting and trying to figure this out, I've also put time.sleep(2) between the print and large number calculation. It doesn't seem to have any effect. If I change y = 10**(10**10) to y = 10**10, then the print and sleep statements work as expected. Adding flush=True to the print statement or sys.stdout.flush() after the print statement don't work either. Why can I not limit CPU time for the calculation of a very large number? How can I fix or at least mitigate this? Additional information: Python version: 3.3.5 (default, Jul 22 2014, 18:16:02) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] Linux information: Linux web455.webfaction.com 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
TLDR: Python precomputes constants in the code. If any very large number is calculated with at least one intermediate step, the process will be CPU time limited. It took quite a bit of searching, but I have discovered evidence that Python 3 does precompute constant literals that it finds in the code before evaluating anything. One of them is this webpage: A Peephole Optimizer for Python. I've quoted some of it below. ConstantExpressionEvaluator This class precomputes a number of constant expressions and stores them in the function's constants list, including obvious binary and unary operations and tuples consisting of just constants. Of particular note is the fact that complex literals are not represented by the compiler as constants but as expressions, so 2+3j appears as LOAD_CONST n (2) LOAD_CONST m (3j) BINARY_ADD This class converts those to LOAD_CONST q (2+3j) which can result in a fairly large performance boost for code that uses complex constants. The fact that 2+3j is used as an example very strongly suggests that not only small constants are being precomputed and cached, but also any constant literals in the code. I also found this comment on another Stack Overflow question (Are constant computations cached in Python?): Note that for Python 3, the peephole optimizer does precompute the 1/3 constant. (CPython specific, of course.) – Mark Dickinson Oct 7 at 19:40 These are supported by the fact that replacing y = 10**(10**10) with this also hangs, even though I never call the function! def f(): y = 10**(10**10) The good news Luckily for me, I don't have any such giant literal constants in my code. Any computation of such constants will happen later, which can be and is limited by the CPU time limit. I changed y = 10**(10**10) to this, x = 10 print(x) y = 10**x print(y) z = 10**y print(z) and got this output, as desired! -1 -1 10 10000000000 ran out of time! The moral of the story: Limiting a process by CPU time or memory consumption (or some other method) will work if there is not a large literal constant in the code that Python tries to precompute.
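You can watch the peephole optimizer fold a constant expression by compiling a small statement yourself (a CPython-specific demonstration; the exact contents of co_consts vary between versions):

import dis

code = compile("y = 2 ** 10", "<demo>", "exec")
print(code.co_consts)   # the folded result 1024 appears among the constants
dis.dis(code)           # the bytecode simply loads 1024 and stores it into y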
Django upgrading to 1.9 error "AppRegistryNotReady: Apps aren't loaded yet."
When upgraded to django 1.9 from 1.8 I got this error. I checked answers for similar questions, but I didn't think this is an issue with any 3rd party packages or apps. Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 350, in execute_from_command_line utility.execute() File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 342, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 176, in fetch_command commands = get_commands() File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/utils/lru_cache.py", line 100, in wrapper result = user_function(*args, **kwds) File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 71, in get_commands for app_config in reversed(list(apps.get_app_configs())): File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/apps/registry.py", line 137, in get_app_configs self.check_apps_ready() File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/apps/registry.py", line 124, in check_apps_ready raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. I'd modified the Installed apps for 'django.contrib.auth'.
Try adding these lines to the top of your settings file: import django django.setup() If this does not help, try removing third-party applications from your installed apps list one by one.
Difference between numpy dot() and Python 3.5+ matrix multiplication @
I recently moved to Python 3.5 and noticed the new matrix multiplication operator (@) sometimes behaves differently from the numpy dot operator. For example, for 3D arrays: import numpy as np a = np.random.rand(8,13,13) b = np.random.rand(8,13,13) c = a @ b # Python 3.5+ d = np.dot(a, b) The @ operator returns an array of shape: c.shape (8, 13, 13) while the np.dot() function returns: d.shape (8, 13, 8, 13) How can I reproduce the same result with numpy dot? Are there any other significant differences?
The @ operator calls the array's __matmul__ method, not dot. This method is also present in the API as the function np.matmul. >>> a = np.random.rand(8,13,13) >>> b = np.random.rand(8,13,13) >>> np.matmul(a, b).shape (8, 13, 13) From the documentation: matmul differs from dot in two important ways. Multiplication by scalars is not allowed. Stacks of matrices are broadcast together as if the matrices were elements. The last point makes it clear that dot and matmul methods behave differently when passed 3D (or higher dimensional) arrays. Quoting from the documentation some more: For matmul: If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly. For np.dot: For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b
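If you need the stacked behaviour without matmul (for example on an older NumPy), einsum over the batch axis expresses the same operation; a sketch assuming the same a and b as above:

import numpy as np

a = np.random.rand(8, 13, 13)
b = np.random.rand(8, 13, 13)
c = np.einsum('nij,njk->nik', a, b)       # per-batch matrix product over the last two axes
print(np.allclose(c, np.matmul(a, b)))    # True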
'is' operator behaves unexpectedly with non-cached integers
When playing around with the Python interpreter, I stumbled upon this conflicting case regarding the is operator: If the evaluation takes place in the function it returns True, if it is done outside it returns False. >>> def func(): ... a = 1000 ... b = 1000 ... return a is b ... >>> a = 1000 >>> b = 1000 >>> a is b, func() (False, True) Since the is operator evaluates the id()'s for the objects involved, this means that a and b point to the same int instance when declared inside of function func but, on the contrary, they point to a different object when outside of it. Why is this so? Note: I am aware of the difference between identity (is) and equality (==) operations as described in Understanding Python's "is" operator. In addition, I'm also aware about the caching that is being performed by python for the integers in range [-5, 256] as described in "is" operator behaves unexpectedly with integers. This isn't the case here since the numbers are outside that range and I do want to evaluate identity and not equality.
tl;dr: As the reference manual states: A block is a piece of Python program text that is executed as a unit. The following are blocks: a module, a function body, and a class definition. Each command typed interactively is a block. This is why, in the case of a function, you have a single code block which contains a single object for the numeric literal 1000, so id(a) == id(b) will yield True. In the second case, you have two distinct code objects each with their own different object for the literal 1000 so id(a) != id(b). Take note that this behavior doesn't manifest with int literals only, you'll get similar results with, for example, float literals (see here). Of course, comparing objects (except for explicit is None tests) should always be done with the equality operator == and not is. Everything stated here applies to the most popular implementation of Python, CPython. Other implementations might differ so no assumptions should be made when using them. Longer Answer: To get a little clearer view and additionally verify this seemingly odd behaviour we can look directly at the code objects for each of these cases using the dis module. For the function func: Along with all other attributes, function objects also have a __code__ hook to allow you to peek into the compiled bytecode for that function. Using dis.code_info we can get a nice pretty view of all stored attributes in a code object for a given function: >>> print(dis.code_info(func)) Name: func Filename: <stdin> Argument count: 0 Kw-only arguments: 0 Number of locals: 2 Stack size: 2 Flags: OPTIMIZED, NEWLOCALS, NOFREE Constants: 0: None 1: 1000 Variable names: 0: a 1: b We're only interested in the Constants entry for function func. In it, we can see that we have two values, None (always present) and 1000. We only have a single int instance that represents the constant 1000. This is the value that a and b are going to be assigned to when the function is invoked. Accessing this value is easy via func.__code__.co_consts[1] and so, another way to view our a is b evaluation in the function would be like so: >>> id(func.__code__.co_consts[1]) == id(func.__code__.co_consts[1]) Which, of course, will evaluate to True because we're referring to the same object. For each interactive command: As noted previously, each interactive command is interpreted as a single code block: parsed, compiled and evaluated independently. We can get the code objects for each command via the compile built-in: >>> com1 = compile("a=1000", filename="", mode="exec") >>> com2 = compile("b=1000", filename="", mode="exec") For each assignment statement, we will get a similar looking code object which looks like the following: >>> print(dis.code_info(com1)) Name: <module> Filename: Argument count: 0 Kw-only arguments: 0 Number of locals: 0 Stack size: 1 Flags: NOFREE Constants: 0: 1000 1: None Names: 0: a The same command for com2 looks the same but has a fundamental difference: each of the code objects com1 and com2 has a different int instance representing the literal 1000. This is why, in this case, when we do a is b via the co_consts argument, we actually get: >>> id(com1.co_consts[0]) == id(com2.co_consts[0]) False Which agrees with what we actually got. Different code objects, different contents. Note: I was somewhat curious as to how exactly this happens in the source code and after digging through it I believe I finally found it. During the compilation phase the co_consts attribute is represented by a dictionary object.
In compile.c we can actually see the initialization: /* snippet for brevity */ u->u_lineno = 0; u->u_col_offset = 0; u->u_lineno_set = 0; u->u_consts = PyDict_New(); /* snippet for brevity */ During compilation this is checked for already existing constants. See @Raymond Hettinger's answer below for a bit more on this. Caveats: Chained statements will evaluate to an identity check of True It should be more clear now why exactly the following evaluates to True: >>> a = 1000; b = 1000; >>> a is b In this case, by chaining the two assignment commands together we tell the interpreter to compile these together. As in the case for the function object, only one object for the literal 1000 will be created resulting in a True value when evaluated. Execution on a module level yields True again: As previously mentioned, the reference manual states that: ... The following are blocks: a module ... So the same premise applies: we will have a single code object (for the module) and so, as a result, single values stored for each different literal. The same doesn't apply for mutable objects: Meaning that unless we explicitly initialize to the same mutable object (for example with a = b = []), the identity of the objects will never be equal, for example: a = []; b = [] a is b # always returns false Again, in the documentation this is specified: after a = 1; b = 1, a and b may or may not refer to the same object with the value one, depending on the implementation, but after c = []; d = [], c and d are guaranteed to refer to two different, unique, newly created empty lists.
Why do many examples use "fig, ax = plt.subplots()" in Matplotlib/pyplot/python
I'm learning to use matplotlib by studying examples, and a lot of examples seem to include a line like the following before creating a single plot... fig, ax = plt.subplots() Here are some examples... Modify tick label text http://matplotlib.org/examples/pylab_examples/boxplot_demo2.html I see this function used a lot, even though the example is only attempting to create a single chart. Is there some other advantage? The official demo for subplots() also uses f, ax = subplots when creating a single chart, and it only ever references ax after that. This is the code they use. # Just a figure and one subplot f, ax = plt.subplots() ax.plot(x, y) ax.set_title('Simple plot')
plt.subplots() is a function that returns a tuple containing a figure and axes object(s). Thus when using fig, ax = plt.subplots() you unpack this tuple into the variables fig and ax. Having fig is useful if you want to change figure-level attributes or save the figure as an image file later (e.g. with fig.savefig('yourfilename.png')). You certainly don't have to use the returned figure object but many people do use it later so it's common to see. Also, all axes objects (the objects that have plotting methods) have a parent figure object anyway, thus: fig, ax = plt.subplots() is more concise than this: fig = plt.figure() ax = fig.add_subplot(111)
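The same pattern scales naturally to more than one axes object, which is where it pays off most; a minimal sketch (filenames and titles are just placeholders):

import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot([1, 2, 3])
ax1.set_title('left')
ax2.plot([3, 2, 1])
ax2.set_title('right')
fig.suptitle('figure-level title')   # a figure-level attribute, set via fig
fig.savefig('both.png')              # saving also goes through fig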
Did something about `namedtuple` change in 3.5.1?
On Python 3.5.0: >>> from collections import namedtuple >>> cluster = namedtuple('Cluster', ['a', 'b']) >>> c = cluster(a=4, b=9) >>> c Cluster(a=4, b=9) >>> vars(c) OrderedDict([('a', 4), ('b', 9)]) On Python 3.5.1: >>> from collections import namedtuple >>> cluster = namedtuple('Cluster', ['a', 'b']) >>> c = cluster(a=4, b=9) >>> c Cluster(a=4, b=9) >>> vars(c) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: vars() argument must have __dict__ attribute Seems like something about namedtuple changed (or maybe it was something about vars()?). Was this intentional? Are we not supposed to use this pattern for converting named tuples into dictionaries anymore?
Per Python bug #24931: [__dict__] disappeared because it was fundamentally broken in Python 3, so it had to be removed. Providing __dict__ broke subclassing and produced odd behaviors. Revision that made the change Specifically, subclasses without __slots__ defined would behave weirdly: >>> Cluster = namedtuple('Cluster', 'x y') >>> class Cluster2(Cluster): pass >>> vars(Cluster(1,2)) OrderedDict([('x', 1), ('y', 2)]) >>> vars(Cluster2(1,2)) {} Use ._asdict().
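A short sketch of the ._asdict() replacement for the vars(c) pattern from the question:

from collections import namedtuple

Cluster = namedtuple('Cluster', ['a', 'b'])
c = Cluster(a=4, b=9)
print(c._asdict())          # OrderedDict([('a', 4), ('b', 9)])
print(dict(c._asdict()))    # a plain dict: {'a': 4, 'b': 9}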
What are the differences between mysql-connector-python, mysql-connector-python-rf and mysql-connector-repackaged?
I'd like to use the mysql-connector library for python 3. I could use pymysql instead, but mysql-connector already has a connection pool implementation, while pymysql doesn't seem to have one. So this would be less code for me to write. However, when I do $ pip3 search mysql-connector I find that these 3 libraries are available: mysql-connector-repackaged - MySQL driver written in Python mysql-connector-python-rf - MySQL driver written in Python mysql-connector-python - MySQL driver written in Python This is very confusing. Anybody knows which one I should use and why? Thanks for your help.
The main differences between them are: mysql-connector-repackaged: is old, do not use it mysql-connector-python 2.0.4: is the original uploaded by MySQL. But it has the problem that it does not work with Django >= 1.8. MySQL has not yet uploaded their stable version 2.1.3 to this repo. mysql-connector-python-rf 2.1.3: is the solution to all your problems if you use Django >= 1.8
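Since the question mentions connection pooling, here is a rough sketch of what that looks like with mysql-connector-python(-rf); the connection parameters are placeholders:

import mysql.connector.pooling

pool = mysql.connector.pooling.MySQLConnectionPool(
    pool_name="mypool",
    pool_size=5,
    host="localhost", user="user", password="secret", database="mydb")  # placeholder credentials

conn = pool.get_connection()
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()   # returns the connection to the pool instead of closing it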
Normal equation and Numpy 'least-squares', 'solve' methods difference in regression?
I am doing linear regression with multiple variables/features. I try to get thetas (coefficients) by using normal equation method (that uses matrix inverse), Numpy least-squares numpy.linalg.lstsq tool and np.linalg.solve tool. In my data I have n = 143 features and m = 13000 training examples. For normal equation method with regularization I use this formula: Sources: Regularization (Andrew Ng, Stanford) Normal equations (Andrew Ng, Stanford) Regularization is used to solve the potential problem of matrix non-invertibility (XtX matrix may become singular/non-invertible) Data preparation code: import pandas as pd import numpy as np path = 'DB2.csv' data = pd.read_csv(path, header=None, delimiter=";") data.insert(0, 'Ones', 1) cols = data.shape[1] X = data.iloc[:,0:cols-1] y = data.iloc[:,cols-1:cols] IdentitySize = X.shape[1] IdentityMatrix= np.zeros((IdentitySize, IdentitySize)) np.fill_diagonal(IdentityMatrix, 1) For least squares method I use Numpy's numpy.linalg.lstsq. Here is Pyhton code: lamb = 1 th = np.linalg.lstsq(X.T.dot(X) + lamb * IdentityMatrix, X.T.dot(y))[0] Also I used np.linalg.solve tool of numpy: lamb = 1 XtX_lamb = X.T.dot(X) + lamb * IdentityMatrix XtY = X.T.dot(y) x = np.linalg.solve(XtX_lamb, XtY); For normal equation I use: lamb = 1 xTx = X.T.dot(X) + lamb * IdentityMatrix XtX = np.linalg.inv(xTx) XtX_xT = XtX.dot(X.T) theta = XtX_xT.dot(y) In all methods I used regularization. Here is results (theta coefficients) to see difference between these three approaches: Normal equation: np.linalg.lstsq np.linalg.solve [-27551.99918303] [-27551.95276154] [-27551.9991855] [-940.27518383] [-940.27520138] [-940.27518383] [-9332.54653964] [-9332.55448263] [-9332.54654461] [-3149.02902071] [-3149.03496582] [-3149.02900965] [-1863.25125909] [-1863.2631435] [-1863.25126344] [-2779.91105618] [-2779.92175308] [-2779.91105347] [-1226.60014026] [-1226.61033117] [-1226.60014192] [-920.73334259] [-920.74331432] [-920.73334194] [-6278.44238081] [-6278.45496955] [-6278.44237847] [-2001.48544938] [-2001.49566981] [-2001.48545349] [-715.79204971] [-715.79664124] [-715.79204921] [ 4039.38847472] [ 4039.38302499] [ 4039.38847515] [-2362.54853195] [-2362.55280478] [-2362.54853139] [-12730.8039209] [-12730.80866036] [-12730.80392076] [-24872.79868125] [-24872.80203459] [-24872.79867954] [-3402.50791863] [-3402.5140501] [-3402.50793382] [ 253.47894001] [ 253.47177732] [ 253.47892472] [-5998.2045186] [-5998.20513905] [-5998.2045184] [ 198.40560401] [ 198.4049081] [ 198.4056042] [ 4368.97581411] [ 4368.97175688] [ 4368.97581426] [-2885.68026222] [-2885.68154407] [-2885.68026205] [ 1218.76602731] [ 1218.76562838] [ 1218.7660275] [-1423.73583813] [-1423.7369068] [-1423.73583793] [ 173.19125007] [ 173.19086525] [ 173.19125024] [-3560.81709538] [-3560.81650156] [-3560.8170952] [-142.68135768] [-142.68162508] [-142.6813575] [-2010.89489111] [-2010.89601322] [-2010.89489092] [-4463.64701238] [-4463.64742877] [-4463.64701219] [ 17074.62997704] [ 17074.62974609] [ 17074.62997723] [ 7917.75662561] [ 7917.75682048] [ 7917.75662578] [-4234.16758492] [-4234.16847544] [-4234.16758474] [-5500.10566329] [-5500.106558] [-5500.10566309] [-5997.79002683] [-5997.7904842] [-5997.79002634] [ 1376.42726683] [ 1376.42629704] [ 1376.42726705] [ 6056.87496151] [ 6056.87452659] [ 6056.87496175] [ 8149.0123667] [ 8149.01209157] [ 8149.01236827] [-7273.3450484] [-7273.34480382] [-7273.34504827] [-2010.61773247] [-2010.61839251] [-2010.61773225] [-7917.81185096] [-7917.81223606] [-7917.81185084] [ 8247.92773739] [ 
8247.92774315] [ 8247.92773722] [ 1267.25067823] [ 1267.24677734] [ 1267.25067832] [ 2557.6208133] [ 2557.62126916] [ 2557.62081337] [-5678.53744654] [-5678.53820798] [-5678.53744647] [ 3406.41697822] [ 3406.42040997] [ 3406.41697836] [-8371.23657044] [-8371.2361594] [-8371.23657035] [ 15010.61728285] [ 15010.61598236] [ 15010.61728304] [ 11006.21920273] [ 11006.21711213] [ 11006.21920284] [-5930.93274062] [-5930.93237071] [-5930.93274048] [-5232.84459862] [-5232.84557665] [-5232.84459848] [ 3196.89304277] [ 3196.89414431] [ 3196.8930428] [ 15298.53309912] [ 15298.53496877] [ 15298.53309919] [ 4742.68631183] [ 4742.6862601] [ 4742.68631172] [ 4423.14798495] [ 4423.14765013] [ 4423.14798546] [-16153.50854089] [-16153.51038489] [-16153.50854123] [-22071.50792741] [-22071.49808389] [-22071.50792408] [-688.22903323] [-688.2310229] [-688.22904006] [-1060.88119863] [-1060.8829114] [-1060.88120546] [-101.75750066] [-101.75776411] [-101.75750831] [ 4106.77311898] [ 4106.77128502] [ 4106.77311218] [ 3482.99764601] [ 3482.99518758] [ 3482.99763924] [-1100.42290509] [-1100.42166312] [-1100.4229119] [ 20892.42685103] [ 20892.42487476] [ 20892.42684422] [-5007.54075789] [-5007.54265501] [-5007.54076473] [ 11111.83929421] [ 11111.83734144] [ 11111.83928704] [ 9488.57342568] [ 9488.57158677] [ 9488.57341883] [-2992.3070786] [-2992.29295891] [-2992.30708529] [ 17810.57005982] [ 17810.56651223] [ 17810.57005457] [-2154.47389712] [-2154.47504319] [-2154.47390285] [-5324.34206726] [-5324.33913623] [-5324.34207293] [-14981.89224345] [-14981.8965674] [-14981.89224973] [-29440.90545197] [-29440.90465897] [-29440.90545704] [-6925.31991443] [-6925.32123144] [-6925.31992383] [ 104.98071593] [ 104.97886085] [ 104.98071152] [-5184.94477582] [-5184.9447972] [-5184.94477792] [ 1555.54536625] [ 1555.54254362] [ 1555.5453638] [-402.62443474] [-402.62539068] [-402.62443718] [ 17746.15769322] [ 17746.15458093] [ 17746.15769074] [-5512.94925026] [-5512.94980649] [-5512.94925267] [-2202.8589276] [-2202.86226244] [-2202.85893056] [-5549.05250407] [-5549.05416936] [-5549.05250669] [-1675.87329493] [-1675.87995809] [-1675.87329255] [-5274.27756529] [-5274.28093377] [-5274.2775701] [-5424.10246845] [-5424.10658526] [-5424.10247326] [-1014.70864363] [-1014.71145066] [-1014.70864845] [ 12936.59360437] [ 12936.59168749] [ 12936.59359954] [ 2912.71566077] [ 2912.71282628] [ 2912.71565599] [ 6489.36648506] [ 6489.36538259] [ 6489.36648021] [ 12025.06991281] [ 12025.07040848] [ 12025.06990358] [ 17026.57841531] [ 17026.56827742] [ 17026.57841044] [ 2220.1852193] [ 2220.18531961] [ 2220.18521579] [-2886.39219026] [-2886.39015388] [-2886.39219394] [-18393.24573629] [-18393.25888463] [-18393.24573872] [-17591.33051471] [-17591.32838012] [-17591.33051834] [-3947.18545848] [-3947.17487999] [-3947.18546459] [ 7707.05472816] [ 7707.05577227] [ 7707.0547217] [ 4280.72039079] [ 4280.72338194] [ 4280.72038435] [-3137.48835901] [-3137.48480197] [-3137.48836531] [ 6693.47303443] [ 6693.46528167] [ 6693.47302811] [-13936.14265517] [-13936.14329336] [-13936.14267094] [ 2684.29594641] [ 2684.29859601] [ 2684.29594183] [-2193.61036078] [-2193.63086307] [-2193.610366] [-10139.10424848] [-10139.11905454] [-10139.10426049] [ 4475.11569903] [ 4475.12288711] [ 4475.11569421] [-3037.71857269] [-3037.72118246] [-3037.71857265] [-5538.71349798] [-5538.71654224] [-5538.71349794] [ 8008.38521357] [ 8008.39092739] [ 8008.38521361] [-1433.43859633] [-1433.44181824] [-1433.43859629] [ 4212.47144667] [ 4212.47368097] [ 4212.47144686] [ 19688.24263706] [ 
19688.2451694] [ 19688.2426368] [ 104.13434091] [ 104.13434349] [ 104.13434091] [-654.02451175] [-654.02493111] [-654.02451174] [-2522.8642551] [-2522.88694451] [-2522.86424254] [-5011.20385919] [-5011.22742915] [-5011.20384655] [-13285.64644021] [-13285.66951459] [-13285.64642763] [-4254.86406891] [-4254.88695873] [-4254.86405637] [-2477.42063206] [-2477.43501057] [-2477.42061727] [ 0.] [ 1.23691279e-10] [ 0.] [-92.79470071] [-92.79467095] [-92.79470071] [ 2383.66211583] [ 2383.66209637] [ 2383.66211583] [-10725.22892185] [-10725.22889937] [-10725.22892185] [ 234.77560283] [ 234.77560254] [ 234.77560283] [ 4739.22119578] [ 4739.22121432] [ 4739.22119578] [ 43640.05854156] [ 43640.05848841] [ 43640.05854157] [ 2592.3866707] [ 2592.38671547] [ 2592.3866707] [-25130.02819215] [-25130.05501178] [-25130.02819515] [ 4966.82173096] [ 4966.7946407] [ 4966.82172795] [ 14232.97930665] [ 14232.9529959] [ 14232.97930363] [-21621.77202422] [-21621.79840459] [-21621.7720272] [ 9917.80960029] [ 9917.80960571] [ 9917.80960029] [ 1355.79191536] [ 1355.79198092] [ 1355.79191536] [-27218.44185748] [-27218.46880642] [-27218.44185719] [-27218.04184348] [-27218.06875423] [-27218.04184318] [ 23482.80743869] [ 23482.78043029] [ 23482.80743898] [ 3401.67707434] [ 3401.65134677] [ 3401.67707463] [ 3030.36383274] [ 3030.36384909] [ 3030.36383274] [-30590.61847724] [-30590.63933424] [-30590.61847706] [-28818.3942685] [-28818.41520495] [-28818.39426833] [-25115.73726772] [-25115.7580278] [-25115.73726753] [ 77174.61695995] [ 77174.59548773] [ 77174.61696016] [-20201.86613672] [-20201.88871113] [-20201.86613657] [ 51908.53292209] [ 51908.53446495] [ 51908.53292207] [ 7710.71327865] [ 7710.71324194] [ 7710.71327865] [-16206.9785119] [-16206.97851993] [-16206.9785119] As you can see normal equation, least squares and np.linalg.solve tool methods give to some extent different results. The question is why these three approaches gives noticeably different results and which method gives more efficient and more accurate result? Assumption: Results of Normal equation method and results of np.linalg.solve are very close to each other. And results of np.linalg.lstsq differ from both of them. Since normal equation uses inverse we do not expect very accurate results of it and therefore results of np.linalg.solve tool also. Seem to be that better results are given by np.linalg.lstsq. Note: Under accuracy I meant how close these method's solutions to real coefficients. So basically I wanted to know wich of these methods is closer to real model. Update: As Dave Hensley mentioned: After the line np.fill_diagonal(IdentityMatrix, 1) this code IdentityMatrix[0,0] = 0 should be added. DB2.csv is available on DropBox: DB2.csv Full Python code is available on DropBox: Full code
Don't calculate matrix inverse to solve linear systems The professional algorithms don't solve for the matrix inverse. It's slow and introduces unnecessary error. It's not a disaster for small systems, but why do something suboptimal? Basically anytime you see the math written as: x = A^-1 * b you instead want: x = np.linalg.solve(A, b) In your case, you want something like: XtX_lamb = X.T.dot(X) + lamb * IdentityMatrix XtY = X.T.dot(y) x = np.linalg.solve(XtX_lamb, XtY);
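For completeness, a sketch of another standard trick: instead of forming X.T.dot(X) at all, solve an augmented least-squares problem with lstsq, which tends to be better conditioned (this assumes y is a column vector and that column 0 of X is the intercept column, which is left unpenalized):

import numpy as np

def ridge_lstsq(X, y, lamb):
    n_features = X.shape[1]
    L = np.sqrt(lamb) * np.eye(n_features)
    L[0, 0] = 0.0                                  # don't regularize the intercept
    X_aug = np.vstack([X, L])
    y_aug = np.vstack([y, np.zeros((n_features, 1))])
    theta, residuals, rank, sv = np.linalg.lstsq(X_aug, y_aug)
    return theta

Minimizing the augmented residual is algebraically the same objective as the regularized normal equation, but it avoids squaring the condition number of X.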
Tuple unpacking order changes values assigned
I think the two are identical. nums = [1, 2, 0] nums[nums[0]], nums[0] = nums[0], nums[nums[0]] print nums # [2, 1, 0] nums = [1, 2, 0] nums[0], nums[nums[0]] = nums[nums[0]], nums[0] print nums # [2, 2, 1] But the results are different. Why are the results different? (why is the second one that result?)
Prerequisites - 2 important Points Lists are mutable The main part in lists is that lists are mutable. It means that the values of lists can be changed. This is one of the reason why you are facing the trouble. Refer the docs for more info Order of Evaluation The other part is that while unpacking a tuple, the evaluation starts from left to right. Refer the docs for more info Introduction when you do a,b = c,d the values of c and d are first stored. Then starting from the left hand side, the value of a is first changed to c and then the value of b is changed to d. The catch here is that if there are any side effects to the location of b while changing the value of a, then d is assigned to the later b, which is the b affected by the side effect of a. Use Case Now coming to your problem In the first case, nums = [1, 2, 0] nums[nums[0]], nums[0] = nums[0], nums[nums[0]] nums[0] is initially 1 and nums[nums[0]] is 2 because it evaluates to nums[1]. Hence 1,2 is now stored into memory. Now tuple unpacking happens from left hand side, so nums[nums[0]] = nums[1] = 1 # NO side Effect. nums[0] = 2 hence print nums will print [2, 1, 0] However in this case nums = [1, 2, 0] nums[0], nums[nums[0]] = nums[nums[0]], nums[0] nums[nums[0]], nums[0] puts 2,1 on the stack just like the first case. However on the left hand side, that is nums[0], nums[nums[0]], the changing of nums[0] has a side effect as it is used as the index in nums[nums[0]]. Thus nums[0] = 2 nums[nums[0]] = nums[2] = 1 # NOTE THAT nums[0] HAS CHANGED nums[1] remains unchanged at value 2. hence print nums will print [2, 2, 1]
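You can watch both the assignment order and the index re-evaluation directly by subclassing list and logging __setitem__; a small demonstration (works on Python 2 and 3):

class LoggingList(list):
    def __setitem__(self, index, value):
        print('setting index %d to %d' % (index, value))
        super(LoggingList, self).__setitem__(index, value)

nums = LoggingList([1, 2, 0])
nums[0], nums[nums[0]] = nums[nums[0]], nums[0]
# setting index 0 to 2
# setting index 2 to 1    <- nums[0] was re-read after it had already changed
print(nums)   # [2, 2, 1]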
Can you fool isatty AND log stdout and stderr separately?
Problem So you want to log the stdout and stderr (separately) of a process or subprocess, without the output being different from what you'd see in the terminal if you weren't logging anything. Seems pretty simple no? Well unfortunately, it appears that it may not be possible to write a general solution for this problem, that works on any given process... Background Pipe redirection is one method to separate stdout and stderr, allowing you to log them individually. Unfortunately, if you change the stdout/err to a pipe, the process may detect the pipe is not a tty (because it has no width/height, baud rate, etc) and may change its behaviour accordingly. Why change the behaviour? Well, some developers make use of features of a terminal which don't make sense if you are writing out to a file. For example, loading bars often require the terminal cursor to be moved back to the beginning of the line and the previous loading bar to be overwritten with a bar of a new length. Also colour and font weight can be displayed in a terminal, but in a flat ASCII file they can not. If you were to write such a program's stdout directly to a file, that output would contain all the terminal ANSI escape codes, rather than properly formatted output. The developer therefore implements some sort of "isatty" check before writing anything to the stdout/err, so it can give a simpler output for files if that check returns false. The usual solution here is to trick such programs into thinking the pipes are actually ttys by using a pty - a bidirectional pipe that also has width, height, etc. You redirect all inputs/outputs of the process to this pty, and that tricks the process into thinking its talking to a real terminal (and you can log it directly to a file). The only problem is, that by using a single pty for stdout and stderr, we can now no longer differentiate between the two. So you might want to try a different pty for each pipe - one for the stdin, one for the stdout, and one for the stderr. While this will work 50% of the time, many processes unfortunately do additional redirection checks that make sure that the output path of the stdout and stderr (/dev/tty000x) are the same. If they are not, there must be redirection, thus they give you the same behaviour as if you had piped the stderr and stdout without a pty. You might think this over-the-top checking for redirection is uncommon, but unfortunately it is actually quite prevalent because a lot of programs re-use other code for checking, like this bit of code found in OSX: http://src.gnu-darwin.org/src/bin/stty/util.c Challenge I think the best way to find a solution is in the form of a challenge. If anyone can run the following script (ideally via Python, but at this point I'll take anything) in such a way that the stdout and stderr is logged separately, AND you managed to fool it into thinking it was executed via a tty, you solve the problem :) #!/usr/bin/python import os import sys if sys.stdout.isatty() and sys.stderr.isatty() and os.ttyname(sys.stdout.fileno()) == os.ttyname(sys.stderr.fileno()): sys.stdout.write("This is a") sys.stderr.write("real tty :)") else: sys.stdout.write("You cant fool me!") sys.stdout.flush() sys.stderr.flush() Note that a solution should really work for any process, not just this code specifically. Overwriting the sys/os module and using LD_PRELOAD is very interesting ways to beat the challenge, but they don't solve the heart of the problem :)
Like this? % ./challenge.py >stdout 2>stderr % cat stdout This is a real tty :) standard output data % cat stderr standard error data Because I cheated a little bit. ;-) % echo $LD_PRELOAD /home/karol/preload.so Like so... % gcc preload.c -shared -o preload.so -fPIC I feel dirty now, but it was fun. :D % cat preload.c #include <stdlib.h> int isatty(int fd) { if(fd == 2 || fd == 1) { return 1; } return 0; } char* ttyname(int fd) { static char* fake_name = "/dev/fake"; if(fd == 2 || fd == 1) { return fake_name; } return NULL; }
Cannot import name _uuid_generate_random in heroku django
I am working on a project which scans user gmail inbox and provides a report. I have deployed it in heroku with following specs: Language: Python 2.7 Framework: Django 1.8 Task scheduler: Celery (Rabbitmq-bigwig for broker url) Now when heroku execute it the celery is not giving me the output. On Heroku push its showing Collectstatic configuration error. I have tried using whitenoise package Also tried executing: heroku run python manage.py collectstatic --dry-run --noinput Still getting the same error. $ heroku run python manage.py collectstatic --noinput gave the following details of the error. File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 303, in execute settings.INSTALLED_APPS File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__ self._setup(name) File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 44, in _setup self._wrapped = Settings(settings_module) File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 92, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/app/.heroku/python/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/app/salesblocker/__init__.py", line 5, in <module> from .celery import app as celery_app File "/app/salesblocker/celery.py", line 5, in <module> from celery import Celery File "/app/.heroku/python/lib/python2.7/site-packages/celery/__init__.py", line 131, in <module> from celery import five # noqa File "/app/.heroku/python/lib/python2.7/site-packages/celery/five.py", line 153, in <module> from kombu.utils.compat import OrderedDict # noqa File "/app/.heroku/python/lib/python2.7/site-packages/kombu/utils/__init__.py", line 19, in <module> from uuid import UUID, uuid4 as _uuid4, _uuid_generate_random ImportError: cannot import name _uuid_generate_random I have also tried to rollback heroku commit to previous working commit and cloned that code but on the next commit(changes:removed a media image from the media folder) its showing the same error again. Thanks in advance
You are coming across this issue, which affects Python 2.7.11 (Kombu is required by Celery). The issue is fixed in Kombu 3.0.30.
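A minimal way to apply that fix on Heroku (assuming your dependencies are pinned in requirements.txt) is to bump the pin to a release that contains it, e.g. kombu>=3.0.30, run pip install --upgrade kombu locally to verify, and redeploy.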
How to prevent tensorflow from allocating the totality of a GPU memory?
I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each. For small to moderate size models, the 12GB of the Titan X are usually enough for 2-3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the Titan X, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having several users running things on the GPUs at once. The problem with TensorFlow is that, by default, it allocates the full amount of available memory on the GPU when it is launched. Even for a small 2-layer Neural Network, I see that the 12 GB of the Titan X are used up. Is there a way to make TensorFlow only allocate, say, 4GB of GPU memory, if one knows that that amount is enough for a given model?
You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument: # Assume that you have 12GB of GPU memory and want to allocate ~4GB: gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.
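If you would rather not pick a fixed fraction up front, TensorFlow also exposes an allow_growth option that starts with a small allocation and grows it on demand (a short sketch; availability and behaviour depend on your TensorFlow version):

config = tf.ConfigProto()
config.gpu_options.allow_growth = True   # allocate GPU memory only as it is needed
sess = tf.Session(config=config)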
Call another function and optionally keep default arguments
I have a function with one optional argument, like this: def funA(x, a, b=1): return a+b*x I want to write a new function that calls funA and also has an optional argument, but if no argument is passed, I want to keep the default in funA. I was thinking something like this: def funB(x, a, b=None): if b: return funA(x, a, b) else: return funA(x, a) Is there a more pythonic way of doing this?
I would replace if b with if b is not None, so that if you pass b=0 (or any other "falsy" value) as argument to funB it will be passed to funA. Apart from that it seems pretty pythonic to me: clear and explicit. (albeit maybe a bit useless, depending on what you're trying to do!) A slightly more cryptic way that relies on calling funB with the correct keyword arguments (e.g. funB(3, 2, b=4)): def funB(x, a, **kwargs): return funA(x, a, **kwargs)
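If None itself is a value you might want to forward to funA, a dedicated sentinel object avoids the ambiguity; a small sketch building on funA from the question:

_SENTINEL = object()

def funB(x, a, b=_SENTINEL):
    if b is _SENTINEL:
        return funA(x, a)      # let funA apply its own default for b
    return funA(x, a, b)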