Dictionary comprehension with lambda functions gives wrong results
I tried the following code in Python 3.5.1: >>> f = {x: (lambda y: x) for x in range(10)} >>> f[5](3) 9 It seems obvious that this should return 5. I don't understand where the other value comes from, and I wasn't able to find anything. It seems to be something related to references - it always returns the result of f[9], which is the last function assigned. What's the error here, and how should this be done so that it works properly?
Python scoping is lexical. A closure will refer to the name and scope of the variable, not the actual object/value of the variable. What happens is that each lambda captures the variable x, not the value of x. At the end of the loop the variable x is bound to 9, so every lambda refers to that x, whose value is 9. Why @ChrisP's answer works: make_func forces the value of x to be evaluated (as it is passed into a function). Thus, each lambda is made with the value x has at that moment, and we avoid the scoping issue above. def make_func(x): return lambda y: x f = {x: make_func(x) for x in range(10)}
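A related idiom that avoids the helper function is to bind the current value through a default argument; this is just a sketch of the same idea, relying on the fact that default values are evaluated when each lambda is created:

f = {x: (lambda y, x=x: x) for x in range(10)}
print(f[5](3))  # prints 5, because each lambda now carries its own copy of x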
Spark program gives odd results when run on standalone cluster
I have this spark program and I'll try to limit it to just the pertinent parts # Split by delimiter , # If the file is in unicode, we need to convert each value to a float in order to be able to # treat it as a number points = sc.textFile(filename).map(lambda line: [float(x) for x in line.split(",")]).persist() # start with K randomly selected points from the dataset # A centroid cannot be an actual data point or else the distance measure between a point and # that centroid will be zero. This leads to an undefined membership value into that centroid. centroids = points.takeSample(False, K, 34) #print centroids # Initialize our new centroids newCentroids = [[] for k in range(K)] tempCentroids = [] for centroid in centroids: tempCentroids.append([centroid[N] + 0.5]) #centroids = sc.broadcast(tempCentroids) convergence = False ncm = NCM() while(not convergence): memberships = points.map(lambda p : (p, getMemberships([p[N]], centroids.value, m))) cmax = memberships.map(lambda (p, mus) : (p, getCMax(mus, centroids.value))) # Memberships T = cmax.map(lambda (p, c) : (p, getMemberships2([p[N]], centroids.value, m, delta, weight1, weight2, weight3, c))) I = cmax.map(lambda (p, c) : (p, getIndeterminateMemberships([p[N]], centroids.value, m, delta, weight1, weight2, weight3, c)[0])) F = cmax.map(lambda (p, c) : (p, getFalseMemberships([p[N]], centroids.value, m, delta, weight1, weight2, weight3, c)[0])) # Components of new centroids wTm = T.map(lambda (x, t) : ('onekey', scalarPow(m, scalarMult(weight1, t)))) #print "wTm = " + str(wTm.collect()) print "at first reduce" sumwTm = wTm.reduceByKey(lambda p1, p2 : addPoints(p1, p2)) #print "sumwTm = " + str(sumwTm.collect()) wTmx = T.map(lambda (x, t) : pointMult([x[N]], scalarPow(m, scalarMult(weight1, t)))) print "adding to cnumerator list" #print wTmx.collect() cnumerator = wTmx.flatMap(lambda p: getListComponents(p)).reduceByKey(lambda p1, p2 : p1 + p2).values() print "collected cnumerator, now printing" #print "cnumerator = " + str(cnumerator.collect()) #print str(sumwTm.collect()) # Calculate the new centroids sumwTmCollection = sumwTm.collect()[0][1] cnumeratorCollection = cnumerator.collect() #print "sumwTmCollection = " + str(sumwTmCollection) #cnumeratorCollection =cnumerator.collectAsMap().get(0).items print "cnumeratorCollection = " + str(cnumeratorCollection) for i in range(len(newCentroids)): newCentroids[i] = scalarMult(1 / sumwTmCollection[i], [cnumeratorCollection[i]]) centroids = newCentroids # Test for convergence convergence = ncm.test([centroids[N]], [newCentroids[N]], epsilon) #convergence = True # Replace our old centroids with the newly found centroids and repeat if convergence not met # Clear out space for a new set of centroids newCentroids = [[] for k in range(K)] This program works pretty well on my local machine, however, it does not behave as expected when run on a standalone cluster. It doesn't necessarily throw an error, but what it does do it give different output than that which I receive when running on my local machine. The cluster and the 3 nodes seem to be working fine. I have a feeling the problem is that I keep updating centroids, which is a python list, and it changes each time through the while-loop. Is it possible that each node may not have the most recent copy of that list? I think so so I tried using a broadcast variable but those can't be updated (read only). I also tried using an accumulator but those are just for accumulations. 
I also tried to save the Python lists as a file on HDFS for each node to have access to, but this didn't work well. Do you think I'm understanding the problem correctly? Is something else likely going on here? Why would code that works fine on my local machine behave differently on a cluster?
Thank you for all of the time and attention to this problem, especially since it sounds like I could have posted more information to make your jobs easier. The problem here is in centroids = points.takeSample(False, K, 34). I didn't realize this, but after a short experiment, this function returns the same output each and every time, despite being what I thought was a random sample. As long as you use the same seed (34 in this case), you will get the same RDD in return. The RDD on my cluster was different for some reason than the one returned to my local machine. In any case, since it was the same RDD each time, my output never changed. The problem with the "random" centroids returned to me is that these particular ones gave rise to something like a saddle point in mathematics, where no convergence of the centroids would be found. That part of the answer is mathematical rather than a programming one, so I won't mention it further. My real hope at this point is that others are helped by the notion that if you want centroids = points.takeSample(False, K, 34) to produce different samples each time it is called, you should change your seed each time to some random number. I hope this all helps. To my memory, I've never before spent so much time on a solution. Thanks again.
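For illustration, a minimal sketch of what "change your seed each time" could look like (the use of the random module here is my own addition, not from the original code):

import random

# A fresh seed per call makes takeSample return a different sample each time
seed = random.randint(0, 2**32 - 1)
centroids = points.takeSample(False, K, seed)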
Hook the global name lookup in a python interpreter
Here is the thing: I have a proxy holding a reference to a remote module, and I put some of these proxies into sys.modules so that I can use them just like local modules. But some other objects are put in the __builtin__ module of the remote environment (like a magic variable for convenience of debugging or referencing). I don't want to reference these vars like conn.__builtin__.var, so I have to either replace the local __builtin__ (replacing sys.modules['__builtin__'] doesn't seem to work) or hook the global name lookup rules. How? For a module you can just overload a __getattr__ to do this. But in an interactive interpreter like IPython, which is the main module, and how would you do this? update: As pointed out by @Nizam Mohamed, yes, I can get the __main__ module, but I still can't modify its name lookup rules. I'd like to turn the local environment completely into the remote one (for a debugging console). UPDATE For now I just iterate over all of the remote __builtin__.__dict__, and if there is a name that isn't in the local __builtin__, I add it to the local __builtin__. But this isn't as dynamic as a name lookup rule such as "if the name can't be found in the local __builtin__, try the remote one". here is a similar discussion. And this question gives a simulation of a module by replacing it with an object in sys.modules. But this won't work for __builtin__ name lookup. I've also tried to replace __builtin__.__getattribute__ with a custom one that first uses the original lookup and falls back to a custom lookup when that fails. But global name lookup never calls into __builtin__.__getattribute__: even though __builtin__.__getattribute__('name') returns the desired value, __builtin__.name or a bare name never does.
Use AST transformation of the IPython shell As @asmeurer said, you can write a simple AST transformer to "hook" the variable name lookup. The base class ast.NodeTransformer provides a visit_Name method that you can manipulate. You just need to overload this method to redefine those variables that exist in the remote module but not locally. The following module can be used as an IPython extension: testAST.py import ast modName = "undefined" modAttr = [] user_ns = {} class MyTransformer(ast.NodeTransformer): def visit_Name(self, node): if node.id in modAttr and not node.id in user_ns: return self.getName(node) return node def getName(self, NameNode): return ast.Attribute(value=ast.Name(id=modName, ctx=ast.Load()), attr = NameNode.id, ctx = NameNode.ctx) def magic_import(self, line): global modName, modAttr, user_ns modName = str(line) if not self.shell.run_code( compile('import {0}'.format(line), '<string>', 'exec') ): user_ns = self.shell.user_ns modAttr = user_ns[line.strip()].__dict__ self.shell.ast_transformers.append(MyTransformer()) print modName, 'imported' def load_ipython_extension(ip): ip.define_magic('magic_import', magic_import) dummyModule.py robot=" World" Usage: In [1]: %load_ext testAST In [2]: %magic_import dummyModule In [3]: print "Hello" , robot Hello World In [4]: dummyModule.robot_II = "Human" In [5]: print "Hi", robot_II Hi Human The benefit of this method is that any modification to the remote module takes effect immediately, because the lookup is done at the language level and no object is copied and cached. One drawback of this method is not being able to handle dynamic lookup. If that's important for you, maybe the python_line_transforms hook is more suitable.
Lowercasing script in Python vs Perl
In Perl, to lowercase a textfile, I could do the following lowercase.perl: #!/usr/bin/env perl use warnings; use strict; binmode(STDIN, ":utf8"); binmode(STDOUT, ":utf8"); while(<STDIN>) { print lc($_); } And on the command line: perl lowercase.perl < infile.txt > lowered.txt In Python, I could do the following with lowercase.py: #!/usr/bin/env python import io import sys with io.open(sys.argv[1], 'r', encoding='utf8') as fin: with io.open(sys.argv[2], 'w', encoding='utf8') as fout: fout.write(fin.read().lower()) And on the command line: python lowercase.py infile.txt lowered.txt Is the Perl lowercase.perl different from the Python lowercase.py? Does it stream the input and lowercase it as it outputs? Or does it read the whole file like Python's lowercase.py? Instead of reading in the whole file, is there a way to stream the input into Python and output the lowercased text byte by byte or char by char? Is there a way to control the command-line syntax such that it follows the Perl STDIN and STDOUT? E.g. python lowercase.py < infile.txt > lowered.txt?
A Python 3.x equivalent for your Perl code may look as follows: #!/usr/bin/env python3.4 import sys for line in sys.stdin: print(line[:-1].lower(), file=sys.stdout) It reads stdin line by line and can be used in a shell pipeline.
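If you also want the explicit UTF-8 handling that the Perl script's binmode calls provide, a Python 3 sketch that streams line by line through explicitly wrapped standard streams could look like this (assuming the same shell redirection: python lowercase.py < infile.txt > lowered.txt):

#!/usr/bin/env python3
import io
import sys

# Wrap the raw byte streams with UTF-8 text wrappers, mirroring binmode(..., ":utf8")
stdin = io.TextIOWrapper(sys.stdin.buffer, encoding='utf8')
stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf8')

for line in stdin:
    stdout.write(line.lower())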
Appropriate Deep Learning Structure for multi-class classification
I have the following data feat_1 feat_2 ... feat_n label gene_1 100.33 10.2 ... 90.23 great gene_2 13.32 87.9 ... 77.18 soso .... gene_m 213.32 63.2 ... 12.23 quitegood The size of M is large, ~30K rows, and N is much smaller, ~10 columns. My question is: what is the appropriate deep learning structure to learn and test data like the above? At the end of the day, the user will give a vector of genes with expression values. gene_1 989.00 gene_2 77.10 ... gene_N 100.10 And the system will decide which label applies to each gene, e.g. great or soso, etc... By structure I mean one of these: Convolutional Neural Network (CNN) Autoencoder Deep Belief Network (DBN) Restricted Boltzmann Machine
To expand a little on @sung-kim 's comment: CNNs are used primarily for problems in computer imaging, such as classifying images. They are modelled on the animal visual cortex; they basically have a connection network such that there are tiles of features which have some overlap. Typically they require a lot of data, more than 30k examples. Autoencoders are used for feature generation and dimensionality reduction. They start with lots of neurons on each layer, then this number is reduced, and then increased again. Each object is trained on itself. This results in the middle layers (low number of neurons) providing a meaningful projection of the feature space in a low dimension. While I don't know much about DBNs, they appear to be a supervised extension of the autoencoder. Lots of parameters to train. Again I don't know much about Boltzmann machines, but they aren't widely used for this sort of problem (to my knowledge). As with all modelling problems though, I would suggest starting from the most basic model to look for signal. Perhaps a good place to start is logistic regression before you worry about deep learning. If you have got to the point where you want to try deep learning, for whatever reason, then for this type of data a basic feed-forward network is the best place to start. In terms of deep learning, 30k data points is not a large number, so it's always best to start out with a small network (1-3 hidden layers, 5-10 neurons) and then get bigger. Make sure you have a decent validation set when performing parameter optimisation though. If you're a fan of the scikit-learn API, I suggest that Keras is a good place to start. One further comment: you will want to use a OneHotEncoder on your class labels before you do any training. EDIT I see from the bounty and the comments that you want to see a bit more about how these networks work.
Please see the example of how to build a feed-forward model and do some simple parameter optisation import numpy as np from sklearn import preprocessing from keras.models import Sequential from keras.layers.core import Dense, Activation, Dropout # Create some random data np.random.seed(42) X = np.random.random((10, 50)) # Similar labels labels = ['good', 'bad', 'soso', 'amazeballs', 'good'] labels += labels labels = np.array(labels) np.random.shuffle(labels) # Change the labels to the required format numericalLabels = preprocessing.LabelEncoder().fit_transform(labels) numericalLabels = numericalLabels.reshape(-1, 1) y = preprocessing.OneHotEncoder(sparse=False).fit_transform(numericalLabels) # Simple Keras model builder def buildModel(nFeatures, nClasses, nLayers=3, nNeurons=10, dropout=0.2): model = Sequential() model.add(Dense(nNeurons, input_dim=nFeatures)) model.add(Activation('sigmoid')) model.add(Dropout(dropout)) for i in xrange(nLayers-1): model.add(Dense(nNeurons)) model.add(Activation('sigmoid')) model.add(Dropout(dropout)) model.add(Dense(nClasses)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='sgd') return model # Do an exhaustive search over a given parameter space for nLayers in xrange(2, 4): for nNeurons in xrange(5, 8): model = buildModel(X.shape[1], y.shape[1], nLayers, nNeurons) modelHist = model.fit(X, y, batch_size=32, nb_epoch=10, validation_split=0.3, shuffle=True, verbose=0) minLoss = min(modelHist.history['val_loss']) epochNum = modelHist.history['val_loss'].index(minLoss) print '{0} layers, {1} neurons best validation at'.format(nLayers, nNeurons), print 'epoch {0} loss = {1:.2f}'.format(epochNum, minLoss) Which outputs 2 layers, 5 neurons best validation at epoch 0 loss = 1.18 2 layers, 6 neurons best validation at epoch 0 loss = 1.21 2 layers, 7 neurons best validation at epoch 8 loss = 1.49 3 layers, 5 neurons best validation at epoch 9 loss = 1.83 3 layers, 6 neurons best validation at epoch 9 loss = 1.91 3 layers, 7 neurons best validation at epoch 9 loss = 1.65
How do I identify sequences of values in a boolean array?
I have a long boolean array: bool_array = [ True, True, True, True, True, False, False, False, False, False, True, True, True, False, False, True, True, True, True, False, False, False, False, False, False, False ] I need to figure out where the values flip, i.e., the addresses where sequences of True and False begin. In this particular case, I would want to get index = [0, 5, 10, 13, 15, 19, 26] Is there an easy way to do this without manually looping to check every ith element against the (i+1)th?
As a more efficient approach for large datasets, in Python 3.x you can use the accumulate and groupby functions from the itertools module. >>> from itertools import accumulate, groupby >>> [0] + list(accumulate(sum(1 for _ in g) for _,g in groupby(bool_array))) [0, 5, 10, 13, 15, 19, 26] The logic behind the code: groupby() groups the consecutive duplicate items, and the code then loops over the iterator returned by groupby(), which yields pairs of keys (discarded here by assigning them to the throwaway variable _) and the grouped iterators. >>> [list(g) for _, g in groupby(bool_array)] [[True, True, True, True, True], [False, False, False, False, False], [True, True, True], [False, False], [True, True, True, True], [False, False, False, False, False, False, False]] So all we need to do is calculate the length of each group and add it to the sum of the previous lengths, which gives the index of the first item of each group - exactly where the value changes. That running sum is exactly what accumulate() is for.
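As an alternative I'm adding here (not part of the original answer): if NumPy is available, the same flip indices can be computed with a vectorised comparison of neighbouring elements:

import numpy as np

arr = np.asarray(bool_array)
# Positions where a value differs from its predecessor, plus the two endpoints
flips = np.flatnonzero(arr[1:] != arr[:-1]) + 1
index = [0] + flips.tolist() + [len(arr)]
print(index)  # [0, 5, 10, 13, 15, 19, 26]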
Get params validation on viewsets.ModelViewSet
I am new to django and building a REST API using django-rest-framework. I have written some code to check whether the user has supplied some parameters or not.But that is very ugly with lot of if conditions, so i want to refactor it.Below is the code that i have written please suggest how to refactor it. I am looking for some django based validations. class AssetsViewSet(viewsets.ModelViewSet): queryset = Assets.objects.using("gpr").all() def create(self, request): assets = [] farming_details = {} bluenumberid = request.data.get('bluenumberid', None) if not bluenumberid: return Response({'error': 'BlueNumber is required.'}) actorid = request.data.get('actorid', None) if not actorid: return Response({'error': 'Actorid is required.'}) asset_details = request.data.get('asset_details', None) if not asset_details: return Response({'error': 'AssetDetails is required.'}) for asset_detail in asset_details: location = asset_detail.get('location', None) if not location: return Response({'error': 'location details is required.'}) assettype = asset_detail.get('type', None) if not assettype: return Response({'error': 'assettype is required.'}) asset_relationship = asset_detail.get('asset_relationship', None) if not asset_relationship: return Response({'error': 'asset_relationship is required.'}) subdivision_code = location.get('subdivision_code', None) if not subdivision_code: return Response({'error': 'subdivision_code is required.'}) country_code = location.get('country_code', None) if not country_code: return Response({'error': 'country_code is required.'}) locationtype = location.get('locationtype', None) if not locationtype: return Response({'error': 'locationtype is required.'}) latitude = location.get('latitude', None) if not latitude: return Response({'error': 'latitude is required.'}) longitude = location.get('longitude', None) if not longitude: return Response({'error': 'longitude is required.'}) try: country_instance = Countries.objects.using('gpr').get(countrycode=country_code) except: return Response({'error': 'Unable to find country with countrycode ' + str(country_code)}) try: subdivision_instance = NationalSubdivisions.objects.using('gpr').get(subdivisioncode=subdivision_code, countrycode=country_code) except: return Response({'error': 'Unable to find subdivision with countrycode ' + str(country_code) + ' and' + ' subdivisioncode ' + str(subdivision_code)}) kwargs = {} kwargs['pobox'] = location.get('pobox', '') kwargs['sublocation'] = location.get('sublocation', '') kwargs['streetaddressone'] = location.get('streetaddressone', '') kwargs['streetaddresstwo'] = location.get('streetaddresstwo', '') kwargs['streetaddressthree'] = location.get('streetaddressthree', '') kwargs['city'] = location.get('city', '') kwargs['postalcode'] = location.get('postalcode', '') cursor = connections['gpr'].cursor() cursor.execute("Select uuid() as uuid") u = cursor.fetchall() uuid = u[0][0].replace("-", "") kwargs['locationid'] = uuid # l.refresh_from_db() try: Locations.objects.using('gpr').create_location(locationtype=locationtype, latitude=latitude, longitude=longitude, countrycode=country_instance, subdivisioncode = subdivision_instance, **kwargs) except (TypeError, ValueError): return Response({'error': 'Error while saving location'}) try: location_entry = Locations.objects.using('gpr').get(locationid=uuid) except: return Response({'error': 'Unable to find location with locationid ' + str(uuid)}) asset_entry = Assets.objects.using('gpr').create(locationid=location_entry, assettype=assettype) 
asset_entry = Assets.objects.using('gpr').filter(locationid=location_entry, assettype=assettype).latest('assetinserted') farming_details[asset_entry.assetid] = [] try: actor = Actors.objects.using('gpr').get(actorid = actorid) except: return Response({'error': 'Unable to find actor with actorid ' + str(actorid)}) assetrelationship = AssetRelationships.objects.using('gpr').create(assetid= asset_entry, actorid= actor,assetrelationship=asset_relationship) assets.append(asset_entry) if assettype=="Farm or pasture land": hectares = asset_detail.get('hectares', None) if hectares is None: return Response({'error': 'hectares must be a decimal number'}) try: farmingasset = FarmingAssets.objects.using('gpr').create(assetid=asset_entry, hectares=hectares) except ValidationError: return Response({'error': 'hectares must be decimal value.'}) farmingasset = FarmingAssets.objects.using('gpr').filter(assetid=asset_entry, hectares=hectares).last() for type_detail in asset_detail.get('type_details', []): crop = type_detail.get('crop', '') hectare = type_detail.get('hectare', '') if crop != '' and hectare != '': try: h3code = ProductCodes.objects.using('gpr').get(h3code=crop) except: return Response({'error': 'Unable to find ProductCode with h3code' + str(crop)}) try: farming = Farming.objects.using('gpr').create(assetid=farmingasset, h3code=h3code, annualyield=hectare) farming_details[asset_entry.assetid].append(farming.farmingid) except Exception as e: return Response({'error': e}) else: return Response({'error': 'crop with hectare is required.'}) i = 0 data = {} for asset in assets: if farming_details[asset.assetid]: data[i] = {"assetid": asset.assetid, "assetbluenumber": asset.assetuniversalid, "farming_ids": farming_details[asset.assetid]} else: data[i] = {"assetid": asset.assetid, "assetbluenumber": asset.assetuniversalid} i+=1 return Response(data) Asset Model class Assets(models.Model): assetid = models.CharField(db_column='AssetID', primary_key=True, max_length=255) # Field name made lowercase. assetname = models.CharField(db_column='AssetName', max_length=255, blank=True, null=True) # Field name made lowercase. locationid = models.ForeignKey('Locations', models.DO_NOTHING, db_column='LocationID') # Field name made lowercase. assetuniversalid = models.CharField(db_column='AssetBluenumber', unique=True, blank=True, null=True, max_length=255) # Field name made lowercase. assettype = models.CharField(db_column='AssetType', max_length=45, blank=True, null=True) # Field name made lowercase. assetinserted = models.DateTimeField(db_column='AssetInserted', blank=True, null=True, auto_now_add=True) # Field name made lowercase. assetupdated = models.DateTimeField(db_column='AssetUpdated', blank=True, null=True, auto_now=True) # Field name made lowercase.
You can use serializers; they give you a very easy way to validate your data. Since in your case all the fields seem to be required, it becomes even easier. Create a file in your api app like: serializers.py #Import the serializers lib from rest_framework import serializers #Import your models here (you can put more than one serializer in one file) from assets.models import Assets #Now make your serializer class class AssetsSerializer(serializers.ModelSerializer): class Meta: model = Assets fields = '__all__' #This last line will put all the fields on your serializer #but you can also specify only some fields like: #fields = ('assetid', 'assetname') In your view you can use your serializer class(es) to validate your data. views.py #Serializers from assets.serializers import AssetsSerializer #Libraries you can use from django.http import Http404 from rest_framework.views import APIView from rest_framework.response import Response from rest_framework import status class AssetsViewSet(viewsets.ModelViewSet): queryset = Assets.objects.using("gpr").all() def create(self, request): assets = [] farming_details = {} #Set your serializer serializer = AssetsSerializer(data=request.data) if serializer.is_valid(): #MAGIC HAPPENS HERE #... Here you do the routine you do when the data is valid #You can use the serializer as an object of your Assets model #Save it serializer.save() return Response(serializer.data, status=status.HTTP_201_CREATED) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) I took this all from the documentation. You can learn a lot by doing the tutorial on the official site. I hope it helps.
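Because the payload in the question is nested (asset_details containing location), nested serializers can express the required-field checks declaratively as well. The following is only a rough sketch, with field names taken from the question and field types guessed, so treat it as an illustration rather than a drop-in solution:

from rest_framework import serializers

class LocationSerializer(serializers.Serializer):
    subdivision_code = serializers.CharField()
    country_code = serializers.CharField()
    locationtype = serializers.CharField()
    latitude = serializers.FloatField()
    longitude = serializers.FloatField()

class AssetDetailSerializer(serializers.Serializer):
    location = LocationSerializer()
    type = serializers.CharField()
    asset_relationship = serializers.CharField()

class AssetCreateSerializer(serializers.Serializer):
    bluenumberid = serializers.CharField()
    actorid = serializers.CharField()
    asset_details = AssetDetailSerializer(many=True)

With this, serializer = AssetCreateSerializer(data=request.data) followed by serializer.is_valid() reports every missing field at once in serializer.errors, instead of returning after the first failed check.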
In Python, when are two objects the same?
It seems that 2 is 2 and 3 is 3 will always be true in python, and in general, any reference to an integer is the same as any other reference to the same integer. The same happens to None (i.e., None is None). I know that this does not happen to user-defined types, or mutable types. But it sometimes fails on immutable types too: >>> () is () True >>> (2,) is (2,) False That is: two independent constructions of the empty tuple yield references to the same object in memory, but two independent constructions of identical one-(immutable-)element tuples end up creating two identical objects. I tested, and frozensets work in a manner similar to tuples. What determines if an object will be duplicated in memory or will have a single instance with lots of references? Does it depend on whether the object is "atomic" in some sense? Does it vary according to implementation?
Python has some types that it guarantees will only have one instance. Examples of these instances are None, NotImplemented, and Ellipsis. These are (by definition) singletons and so things like None is None are guaranteed to return True because there is no way to create a new instance of NoneType. It also supplies a few doubletons 1 True, False 2 -- All references to True point to the same object. Again, this is because there is no way to create a new instance of bool. The above things are all guaranteed by the python language. However, as you have noticed, there are some types (all immutable) that store some instances for reuse. This is allowed by the language, but different implementations may choose to use this allowance or not -- depending on their optimization strategies. Some examples that fall into this category are small integers (-5 -> 255), the empty tuple and empty frozenset. Finally, Cpython interns certain immutable objects during parsing... e.g. if you run the following script with Cpython, you'll see that it returns True: def foo(): return (2,) if __name__ == '__main__': print foo() is foo() This seems really odd. The trick that Cpython is playing is that whenever it constructs the function foo, it sees a tuple-literal that contains other simple (immutable) literals. Rather than create this tuple (or it's equivalents) over and over, python just creates it once. There's no danger of that object being changed since the whole deal is immutable. This can be a big win for performance where the same tight loop is called over and over. Small strings are interned as well. The real win here is in dictionary lookups. Python can do a (blazingly fast) pointer compare and then fall back on slower string comparisons when checking hash collisions. Since so much of python is built on dictionary lookups, this can be a big optimization for the language as a whole. 1I might have just made up that word ... But hopefully you get the idea... 2Under normal circumstances, you don't need do check if the object is a reference to True -- Usually you just care if the object is "truthy" -- e.g. if if some_instance: ... will execute the branch. But, I put that in here just for completeness. Note that is can be used to compare things that aren't singletons. One common use is to create a sentinel value: sentinel = object() item = next(iterable, sentinel) if items is sentinel: # iterable exhausted. Or: _sentinel = object() def function(a, b, none_is_ok_value_here=_sentinel): if none_is_ok_value_here is sentinel: # Treat the function as if `none_is_ok_value_here` was not provided. The moral of this story is to always say what you mean. If you want to check if a value is another value, then use the is operator. If you want to check if a value is equal to another value (but possibly distinct), then use ==. For more details on the difference between is and == (and when to use which), consult one of the following posts: Is there a difference between `==` and `is` in Python? Python None comparison: should I use "is" or ==? Addendum We've talked about these CPython implementation details and we've claimed that they're optimizations. It'd be nice to try to measure just what we get from all this optimizing (other than a little added confusion when working with the is operator). String "interning" and dictionary lookups. Here's a small script that you can run to see how much faster dictionary lookups are if you use the same string to look up the value instead of a different string. 
Note, I use the term "interned" in the variable names -- These values aren't necessarily interned (though they could be). I'm just using that to indicate that the "interned" string is the string in the dictionary. import timeit interned = 'foo' not_interned = (interned + ' ').strip() assert interned is not not_interned d = {interned: 'bar'} print('Timings for short strings') number = 100000000 print(timeit.timeit( 'd[interned]', setup='from __main__ import interned, d', number=number)) print(timeit.timeit( 'd[not_interned]', setup='from __main__ import not_interned, d', number=number)) #################################################### interned_long = interned * 100 not_interned_long = (interned_long + ' ').strip() d[interned_long] = 'baz' assert interned_long is not not_interned_long print('Timings for long strings') print(timeit.timeit( 'd[interned_long]', setup='from __main__ import interned_long, d', number=number)) print(timeit.timeit( 'd[not_interned_long]', setup='from __main__ import not_interned_long, d', number=number)) The exact values here shouldn't matter too much, but on my computer, the short strings show about 1 part in 7 faster. The long strings are almost 2x faster (because the string comparison takes longer if the string has more characters to compare). The differences aren't quite as striking on python3.x, but they're still definitely there. Tuple "interning" Here's a small script you can play around with: import timeit def foo_tuple(): return (2, 3, 4) def foo_list(): return [2, 3, 4] assert foo_tuple() is foo_tuple() number = 10000000 t_interned_tuple = timeit.timeit('foo_tuple()', setup='from __main__ import foo_tuple', number=number) t_list = (timeit.timeit('foo_list()', setup='from __main__ import foo_list', number=number)) print(t_interned_tuple) print(t_list) print(t_interned_tuple / t_list) print('*' * 80) def tuple_creation(x): return (x,) def list_creation(x): return [x] t_create_tuple = timeit.timeit('tuple_creation(2)', setup='from __main__ import tuple_creation', number=number) t_create_list = timeit.timeit('list_creation(2)', setup='from __main__ import list_creation', number=number) print(t_create_tuple) print(t_create_list) print(t_create_tuple / t_create_list) This one is a bit trickier to time (and I'm happy to take any better ideas how to time it in comments). The gist of this is that on average (and on my computer), a tuple takes about 60% as long to create as a list does. However, foo_tuple() takes on average about 40% the time that foo_list() takes. That shows that we really do gain a little bit of a speedup from these interns. The time savings seem to increase as the tuple gets larger (creating a longer list takes longer -- The tuple "creation" takes constant time since it was already created). Also note that I've called this "interning". It actually isn't (at least not in the same sense the strings are interned). We can see the difference in this simple script: def foo_tuple(): return (2,) def bar_tuple(): return (2,) def foo_string(): return 'foo' def bar_string(): return 'foo' print(foo_tuple() is foo_tuple()) # True print(foo_tuple() is bar_tuple()) # False print(foo_string() is bar_string()) # True We see that the strings are really "interned" -- Different invocations using the same literal notation return the same object. The tuple "interning" seems to be specific to a single line.
What does Python mean by printing "[...]" for an object reference?
I'm printing the value of what I thought was a list, but the output that I get is: [...] What does this represent? How do I test for it? I've tried: myVar.__repr__() != '[...]' and myVar.__repr__() != Ellipsis but no dice... Here's a cutdown of the code that's giving the issue: def buildPaths(graph, start, end, path=[], totalPaths=[]): """ returns list of all possible paths from start node to the end node """ path = path + [start] if start == end: return path for nextNode in graph.childrenOf(start): if nextNode not in path: newPath = buildPaths(graph, nextNode, end, path, totalPaths) if newPath != []: # test totalPaths.append(newPath) return totalPaths totalPaths contains a LOT of [...] supposedly recursive lists, but I can't see why. I've altered the test at #test to prevent this. I've also tried: def buildPaths(graph, thisNode, end, path=[], totalPaths=None): """ returns list of all possible paths from start node to the end node """ path = path + [thisNode] if thisNode == end: return path for nextNode in graph.childrenOf(thisNode): if nextNode not in path: newPath = buildPaths(graph, nextNode, end, path, totalPaths) if newPath != None: if totalPaths == None: totalPaths = [newPath] else: totalPaths.append(newPath) return totalPaths in order to explicitly return None for empty paths.
It represents a self-reference within the structure: the object contains itself, so displaying it in full would loop forever. An example: In [1]: l = [1, 2] In [2]: l[0] = l In [3]: l Out[3]: [[...], 2] l's first item is l itself. It's a recursive reference, and so Python can't reasonably display its contents. Instead it shows [...]
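To answer the "How do I test for it?" part: [...] is only how the repr is rendered, not a value you can compare against. A small sketch that checks whether a list directly contains itself (a full check for deeper cycles would need to recurse):

def contains_itself(lst):
    # True if any element is the very same object as the list itself
    return any(item is lst for item in lst)

l = [1, 2]
l[0] = l
print(contains_itself(l))  # True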
Pairwise circular Python 'for' loop
Is there a nice Pythonic way to loop over a list, returning pairs of elements? The last element should be paired with the first. So for instance, if I have the list [1, 2, 3], I would like to get the following pairs: 1 - 2 2 - 3 3 - 1
A Pythonic way to access a list pairwise is: zip(L, L[1:]). To connect the last item to the first one: >>> L = [1, 2, 3] >>> zip(L, L[1:] + L[:1]) [(1, 2), (2, 3), (3, 1)]
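For longer lists, if you'd rather not build the L[1:] + L[:1] copy, here is an itertools-based sketch of the same pairing (assuming L is a sequence, not a one-shot iterator):

from itertools import chain, islice

def circular_pairwise(seq):
    # Pair each element with its successor, wrapping the last around to the first
    return zip(seq, chain(islice(seq, 1, None), islice(seq, 0, 1)))

print(list(circular_pairwise([1, 2, 3])))  # [(1, 2), (2, 3), (3, 1)]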
How does an int object use the "==" operator without an __eq__() method in Python 2?
After I read the "Fluent Python" book, I understand how the == operator works with Python objects, using the __eq__() method. But how does it work with int instances in Python 2? >>> a = 1 >>> b = 1 >>> a == b True >>> a.__eq__(b) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'int' object has no attribute '__eq__' In Python 3, however, a.__eq__(b) returns True
Python prefers to use rich comparison functions (__eq__, __lt__, __ne__, etc.), but if those don't exist, it falls back to using a single comparison function (__cmp__, removed in Python 3): These are the so-called "rich comparison" methods, and are called for comparison operators in preference to __cmp__() below. The Python 2 integer type doesn't implement a rich comparison function: PyTypeObject PyInt_Type = { ... (cmpfunc)int_compare, /* tp_compare */ ... 0, /* tp_richcompare */ In Python 3, the integer type (now a long) implements only a rich comparison function, since Python 3 dropped support for __cmp__: PyTypeObject PyLong_Type = { ... long_richcompare, /* tp_richcompare */ This is why (123).__eq__ doesn't exist in Python 2. Instead, Python 2 falls back to (123).__cmp__ when testing the equality of two integers: >>> (1).__eq__(2) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'int' object has no attribute '__eq__' >>> (1).__cmp__(2) -1
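To see the same fallback in action at the Python level, here is a small Python 2-only sketch of a user-defined class that implements only __cmp__ and still supports the comparison operators:

class Version(object):
    def __init__(self, number):
        self.number = number

    def __cmp__(self, other):
        # Python 2 only: return negative, zero or positive, like the cmp() builtin
        return cmp(self.number, other.number)

print(Version(1) == Version(1))  # True, via the __cmp__ fallback
print(Version(1) < Version(2))   # True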
Adding keys to defaultdict(dict)
I have a defaultdict that looks like this: my_dict = defaultdict(dict) which will print out: defaultdict(<class 'dict'>, {}) I also have two lists, which look like this: list1 = ["W", "IY", "W"] list2 = ["w", "ee", "w"] I would like to create a defaultdict which looks like this: defaultdict(<class 'dict'>, {'W': {'w': 2}, 'IY': {'ee': 1}}) which uses the items of list1 as the outer keys, each mapping to an inner dictionary whose keys come from list2 and whose values count the instances of those list2 items. So far I have this: from collections import defaultdict d = defaultdict(dict) list1 = ["W", "IY", "W"] list2 = ["w", "ee", "w"] for char in list1: d[char] += 1 I know that this is not correct, as the defaultdict(dict) cannot be treated this way. Is there a way I could do this? Any help would be greatly appreciated :)
Here is a solution using collections.Counter. import collections d = collections.defaultdict(collections.Counter) list1 = ["O", "TH", "O", "O"] list2 = ["o", "th", "o", "o1"] for key, value in zip(list1, list2): d[key].update([value]) >>> d defaultdict(<class 'collections.Counter'>, {'TH': Counter({'th': 1}), 'O': Counter({'o': 2, 'o1': 1})}) >>> While this doesn't strictly follow your requirements, collections.Counter inherits from dict so it has all of dict's attributes
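If you need the result to be plain nested dicts exactly as shown in the question (rather than Counter objects), a sketch with an inner defaultdict(int) gives the same counts:

from collections import defaultdict

d = defaultdict(lambda: defaultdict(int))
list1 = ["W", "IY", "W"]
list2 = ["w", "ee", "w"]

for key, value in zip(list1, list2):
    d[key][value] += 1

print({k: dict(v) for k, v in d.items()})  # {'W': {'w': 2}, 'IY': {'ee': 1}}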
Ansible roles/packages - Ansible Galaxy - error on installation on Mac OS X
Im trying to install ansible-galaxy roles on Mac OS X El Capitan via CLI $ ansible-galaxy install -r requirements.yml I am getting this error: ERROR! Unexpected Exception: (setuptools 1.1.6 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('setuptools>=11.3')) the full traceback was: Traceback (most recent call last): File "/usr/local/bin/ansible-galaxy", line 73, in <module> mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass) File "/Library/Python/2.7/site-packages/ansible/cli/galaxy.py", line 38, in <module> from ansible.galaxy.role import GalaxyRole File "/Library/Python/2.7/site-packages/ansible/galaxy/role.py", line 35, in <module> from ansible.playbook.role.requirement import RoleRequirement File "/Library/Python/2.7/site-packages/ansible/playbook/__init__.py", line 25, in <module> from ansible.playbook.play import Play File "/Library/Python/2.7/site-packages/ansible/playbook/play.py", line 27, in <module> from ansible.playbook.base import Base File "/Library/Python/2.7/site-packages/ansible/playbook/base.py", line 35, in <module> from ansible.parsing.dataloader import DataLoader File "/Library/Python/2.7/site-packages/ansible/parsing/dataloader.py", line 32, in <module> from ansible.parsing.vault import VaultLib File "/Library/Python/2.7/site-packages/ansible/parsing/vault/__init__.py", line 67, in <module> from cryptography.hazmat.primitives.hashes import SHA256 as c_SHA256 File "/Library/Python/2.7/site-packages/cryptography/hazmat/primitives/hashes.py", line 15, in <module> from cryptography.hazmat.backends.interfaces import HashBackend File "/Library/Python/2.7/site-packages/cryptography/hazmat/backends/__init__.py", line 7, in <module> import pkg_resources File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2797, in <module> parse_requirements(__requires__), Environment() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 580, in resolve raise VersionConflict(dist,req) # XXX put more info here VersionConflict: (setuptools 1.1.6 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('setuptools>=11.3')) Any thoughts?
Run the following to upgrade setuptools under the python user: pip install --upgrade setuptools --user python For some reason, the way things are installed inside OS X (and in my case, under CentOS 7 inside a Docker container), the setuptools package doesn't get installed correctly under the right user.
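As a quick sanity check after the upgrade (my addition, not part of the original fix), you can confirm which setuptools version the system interpreter now sees:

python -c "import setuptools; print(setuptools.__version__)"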
How to crop biggest rectangle out of an image
I have a few images of pages on a table. I would like to crop the pages out of the image. Generally, the page will be the biggest rectangle in the image, however, all four sides of the rectangle might not be visible in some cases. I am doing the following but not getting desired results: import cv2 import numpy as np im = cv2.imread('images/img5.jpg') gray=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) ret,thresh = cv2.threshold(gray,127,255,0) _,contours,_ = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE) areas = [cv2.contourArea(c) for c in contours] max_index = np.argmax(areas) cnt=contours[max_index] x,y,w,h = cv2.boundingRect(cnt) cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2) cv2.imshow("Show",im) cv2.imwrite("images/img5_rect.jpg", im) cv2.waitKey(0) Below are a few examples: 1st Example: I can find the rectangle in this image , however, would like if the remaining part of the wood can be cropped out as well. 2nd Example: Not finding the correct dimensions of the rectangle in this image. 3rd Example: Not able to find the correct dimensions in this image either. 4th Example: Same with this as well.
As I have previously done something similar, I have experienced with hough transforms, but they were much harder to get right for my case than using contours. I have the following suggestions to help you get started: Generally paper (edges, at least) is white, so you may have better luck by going to a colorspace like YUV which better separates luminosity: image_yuv = cv2.cvtColor(image,cv2.COLOR_BGR2YUV) image_y = np.zeros(image_yuv.shape[0:2],np.uint8) image_y[:,:] = image_yuv[:,:,0] The text on the paper is a problem. Use a blurring effect, to (hopefully) remove these high frequency noises. You may also use morphological operations like dilation as well. image_blurred = cv2.GaussianBlur(image_y,(3,3),0) You may try to apply a canny edge-detector, rather than a simple threshold. Not necessarily, but may help you: edges = cv2.Canny(image_blurred,100,300,apertureSize = 3) Then find the contours. In my case I only used the extreme outer contours. You may use CHAIN_APPROX_SIMPLE flag to compress the contour contours,hierarchy = cv2.findContours(edges,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) Now you should have a bunch of contours. Time to find the right ones. For each contour cnt, first find the convex hull, then use approaxPolyDP to simplify the contour as much as possible. hull = cv2.convexHull(cnt) simplified_cnt = cv2.approxPolyDP(hull,0.001*cv2.arcLength(hull,True),True) Now we should use this simplified contour to find the enclosing quadrilateral. You may experiment with lots of rules you come up with. The simplest method is picking the four longest longest segments of the contour, and then create the enclosing quadrilateral by intersecting these four lines. Based on your case, you can find these lines based on the contrast the line makes, the angle they make and similar things. Now you have a bunch of quadrilaterals. You can now perform a two step method to find your required quadrilateral. First you remove those ones that are probably wrong. For example one angle of the quadrilateral is more than 175 degrees. Then you can pick the one with the biggest area as the final result. You can see the orange contour as one of the results I got at this point: The final step after finding (hopefully) the right quadrilateral, is transforming back to a rectangle. For this you can use findHomography to come up with a transformation matrix. (H,mask) = cv2.findHomography(cnt.astype('single'),np.array([[[0., 0.]],[[2150., 0.]],[[2150., 2800.]],[[0.,2800.]]],dtype=np.single)) The numbers assume projecting to letter paper. You may come up with better and more clever numbers to use. You also need to reorder the contour points to match the order of coordinates of the letter paper. Then you call warpPerspective to create the final image: final_image = cv2.warpPerspective(image,H,(2150, 2800)) This warping should result in something like the following (from my results before): I hope this helps you to find an appropriate approach in your case.
Scrapy: non-blocking pause
I have a problem. I need to stop the execution of a function for a while, but not stop the parsing as a whole. That is, I need a non-blocking pause. It looks like this: class ScrapySpider(Spider): name = 'live_function' def start_requests(self): yield Request('some url', callback=self.non_stop_function) def non_stop_function(self, response): for url in ['url1', 'url2', 'url3', 'more urls']: yield Request(url, callback=self.second_parse_function) # Here I need some function for sleep only this function like time.sleep(10) yield Request('some url', callback=self.non_stop_function) # Call itself def second_parse_function(self, response): pass The function non_stop_function needs to be stopped for a while, but it should not block the rest of the output. If I insert time.sleep() it will stop the whole parser, and I don't want that. Is it possible to pause one function using Twisted or something else? Reason: I need to create a non-blocking function that will parse a page of the website every n seconds. There it will collect URLs and then wait for 10 seconds. URLs that have already been obtained will continue to be processed, but the main function needs to sleep. UPDATE: Thanks to TkTech and viach. One answer helped me to understand how to make a pending Request, and the second how to activate it. Both answers complement each other and I made an excellent non-blocking pause for Scrapy: def call_after_pause(self, response): d = Deferred() reactor.callLater(10.0, d.callback, Request( 'https://example.com/', callback=self.non_stop_function, dont_filter=True)) return d And use this function for my request: yield Request('https://example.com/', callback=self.call_after_pause, dont_filter=True)
If you're attempting to use this for rate limiting, you probably just want to use DOWNLOAD_DELAY instead. Scrapy is just a framework on top of Twisted. For the most part, you can treat it the same as any other Twisted app. Instead of calling sleep, just return the next request to make and tell Twisted to wait a bit. Ex: from twisted.internet import reactor, defer def non_stop_function(self, response): d = defer.Deferred() reactor.callLater(10.0, d.callback, Request( 'some url', callback=self.non_stop_function )) return d
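For the rate-limiting case mentioned at the start, the settings-based alternative is just a couple of lines in settings.py (the value here is illustrative):

# settings.py
DOWNLOAD_DELAY = 10               # wait 10 seconds between requests
RANDOMIZE_DOWNLOAD_DELAY = False  # keep the delay fixed instead of 0.5x-1.5x of it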
Getting PyCharm to recognize python on the windows linux subsystem (bash on windows)
While running Linux versions of python, pip etc. "natively" on Windows is amazing, I'd like to do so using a proper IDE. Since SSHD compatibility has not been implemented yet, I'm trying to get PyCharm to recognize Linux python as a local interpreter. After installing the Windows Linux subsystem, typing bash -c python from the Windows command line will drop you into a python shell. bash -c "echo \"print 'hello world'\" | python" works as well, producing "hello world" as output in the Windows shell! I'm trying to wrap this up as a .bat file and present it to PyCharm as a local interpreter, i.e. python.bat: C:\Windows\System32\bash.exe -c "echo %1 | python" But I keep getting "the sdk seems invalid" for any variation I try. Since I'm not sure exactly what PyCharm is doing to "validate" the SDK, this is hard to overcome. Any ideas?
Well, I've managed to produce an ugly working hack. You'll have to install python-setuptools and pip manually under the Linux subsystem. Be sure to use the pip version provided by PyCharm, you'll find it at a path similar to: C:\Program Files (x86)\JetBrains\PyCharm 2016.1.2\helpers\pip-7.1.0.tar.gz Then setup the following script as "python.bat" under "c:\Python" and point PyCharm to it as an interpreter: @echo off @setlocal enableextensions enabledelayedexpansion :: Requiers pip and setuptools to already be installed on linux subsystem Set "Pattern= " Set "Replace=\ " Set "cdrive=C:" Set "linpath=/mnt/c" :: Iterate over arguments, convert paths to linux format and concatinate set argCount=0 for %%x in (%*) do ( set /A argCount+=1 set arg=%%x :: Backward slash to forward slash SET arg=!arg:\=/! :: C drive to /mnt/c/ - default linux subsystem mount point SET arg=!arg:%cdrive%=%linpath%! :: Space to escaped space SET arg=!arg:%Pattern%=%Replace%! :: Parethesis to escaped parenteses SET arg=!arg:^(=\^(! SET arg=!arg:^)=\^)%! :: Deqoute voodoo via http://ss64.com/nt/syntax-dequote.html SET arg=###!arg!### SET arg=!arg:"###=! SET arg=!arg:###"=! SET arg=!arg:###=! if "!args!"=="" ( set args=!arg! ) else ( set args=!args! !arg! ) ) :: Dump it to the interpreter :: Output is piped inside the Linux subsys, as windows piping for bash seems broken START "Terrible hack to avoid pipe error" /W /MIN C:\Windows\System32\bash.exe -c "python !args! > /mnt/c/Python/test" :: Output resulr from piped file type c:\Python\test :: echo !args! EXIT /B > NUL Forgive the terrible coding style, as I've never really developed windows batch files before. You may have to tweak the directory structure to match your system. Also note that the output of any python script called by Python.bat is piped to a temp file under the linux subsystem, then typed back out under windows. For some reason, piping the output of bash.exe via windows causes errors. Hope this helps. UPDATE: Wrapped the call to "bash" with "START" in order to avoid terrible pipe handling errors (c.f. https://wpdev.uservoice.com/forums/266908-command-prompt-console-bash-on-ubuntu-on-windo/suggestions/13425768-allow-windows-programs-to-spawn-bash)
How to tell if a single line of python is syntactically valid?
It is very similar to this: How to tell if a string contains valid Python code The only difference being instead of the entire program being given altogether, I am interested in a single line of code at a time. Formally, we say a line of python is "syntactically valid" if there exists any syntactically valid python program that uses that particular line. For instance, I would like to identify these as syntactically valid lines: for i in range(10): x = 1 Because one can use these lines in some syntactically valid python programs. I would like to identify these lines as syntactically invalid lines: for j in range(10 in range(10( x =++-+ 1+- Because no syntactically correct python programs could ever use these lines The check does not need to be too strict, it just need to be good enough to filter out obviously bogus statements (like the ones shown above). The line is given as a string, of course.
This uses codeop.compile_command to attempt to compile the code. This is the same logic that the code module does to determine whether to ask for another line or immediately fail with a syntax error. import codeop def is_valid_code(line): try: codeop.compile_command(line) except SyntaxError: return False else: return True It can be used as follows: >>> is_valid_code('for i in range(10):') True >>> is_valid_code('') True >>> is_valid_code('x = 1') True >>> is_valid_code('for j in range(10 in range(10(') True >>> is_valid_code('x = ++-+ 1+-') False I'm sure at this point, you're saying "what gives? for j in range(10 in range(10( was supposed to be invalid!" The problem with this line is that 10() is technically syntactically valid, at least according to the Python interpreter. In the REPL, you get this: >>> 10() Traceback (most recent call last): File "<pyshell#22>", line 1, in <module> 10() TypeError: 'int' object is not callable Notice how this is a TypeError, not a SyntaxError. ast.parse says it is valid as well, and just treats it as a call with the function being an ast.Num. These kinds of things can't easily be caught until they actually run. If some kind of monster managed to modify the value of the cached 10 value (which would technically be possible), you might be able to do 10(). It's still allowed by the syntax. What about the unbalanced parentheses? This fits the same bill as for i in range(10):. This line is invalid on its own, but may be the first line in a multi-line expression. For example, see the following: >>> is_valid_code('if x ==') False >>> is_valid_code('if (x ==') True The second line is True because the expression could continue like this: if (x == 3): print('x is 3!') and the expression would be complete. In fact, codeop.compile_command distinguishes between these different situations by returning a code object if it's a valid self-contained line, None if the line is expected to continue for a full expression, and throwing a SyntaxError on an invalid line. However, you can also get into a much more complicated problem than initially stated. For example, consider the line ). If it's the start of the module, or the previous line is {, then it's invalid. However, if the previous line is (1,2,, it's completely valid. The solution given here will work if you only work forward, and append previous lines as context, which is what the code module does for an interactive session. Creating something that can always accurately identify whether a single line could possibly exist in a Python file without considering surrounding lines is going to be extremely difficult, as the Python grammar interacts with newlines in non-trivial ways. This answer responds with whether a given line could be at the beginning of a module and continue on to the next line without failing. It would be better to identify what the purpose of recognizing single lines is and solve that problem in a different way than trying to solve this for every case.
Python dictionary doesn't have all the keys assigned, or items
I created the following dictionary exDict = {True: 0, False: 1, 1: 'a', 2: 'b'} and when I print exDict.keys(), well, it gives me a generator. Ok, so I coerce it to a list, and it gives me [False, True, 2] Why isn't 1 there? When I print exDict.items() it gives me [(False, 1), (True, 'a'), (2, 'b')] Anyone have a guess about what's going on here? I'm stumped.
This happens because True == 1 (and False == 0, but you didn't have 0 as a key). You'll have to refactor your code or data somehow, because a dict considers two keys to be the same if they compare equal, rather than requiring them to be the same object (as is would).
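A short illustration of the collision: True and 1 are equal and hash equally, so they land in the same dictionary slot, and later assignments overwrite the value while keeping the original key object:

>>> hash(True) == hash(1)
True
>>> d = {True: 0}
>>> d[1] = 'a'   # same slot as True: the value is replaced, the key stays True
>>> d
{True: 'a'}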
Times two faster than bit shift?
I was looking at the source of sorted_containers and was surprised to see this line: self._load, self._twice, self._half = load, load * 2, load >> 1 Here load is an integer. Why use bit shift in one place, and multiplication in another? It seems reasonable that bit shifting may be faster than integral division by 2, but why not replace the multiplication by a shift as well? I benchmarked the the following cases: (times, divide) (shift, shift) (times, shift) (shift, divide) and found that #3 is consistently faster than other alternatives: # self._load, self._twice, self._half = load, load * 2, load >> 1 import random import timeit import pandas as pd x = random.randint(10 ** 3, 10 ** 6) def test_naive(): a, b, c = x, 2 * x, x // 2 def test_shift(): a, b, c = x, x << 1, x >> 1 def test_mixed(): a, b, c = x, x * 2, x >> 1 def test_mixed_swaped(): a, b, c = x, x << 1, x // 2 def observe(k): print(k) return { 'naive': timeit.timeit(test_naive), 'shift': timeit.timeit(test_shift), 'mixed': timeit.timeit(test_mixed), 'mixed_swapped': timeit.timeit(test_mixed_swaped), } def get_observations(): return pd.DataFrame([observe(k) for k in range(100)]) The question: Is my test valid? If so, why is (multiply, shift) faster than (shift, shift)? I run Python 3.5 on Ubuntu 14.04. Edit Above is the original statement of the question. Dan Getz provides an excellent explanation in his answer. For the sake of completeness, here are sample illustrations for larger x when multiplication optimizations do not apply.
This seems to be because multiplication of small numbers is optimized in CPython 3.5, in a way that left shifts by small numbers are not. Positive left shifts always create a larger integer object to store the result, as part of the calculation, while for multiplications of the sort you used in your test, a special optimization avoids this and creates an integer object of the correct size. This can be seen in the source code of Python's integer implementation. Because integers in Python are arbitrary-precision, they are stored as arrays of integer "digits", with a limit on the number of bits per integer digit. So in the general case, operations involving integers are not single operations, but instead need to handle the case of multiple "digits". In pyport.h, this bit limit is defined as 30 bits on 64-bit platform, or 15 bits otherwise. (I'll just call this 30 from here on to keep the explanation simple. But note that if you were using Python compiled for 32-bit, your benchmark's result would depend on if x were less than 32,768 or not.) When an operation's inputs and outputs stay within this 30-bit limit, the operation can be handled in an optimized way instead of the general way. The beginning of the integer multiplication implementation is as follows: static PyObject * long_mul(PyLongObject *a, PyLongObject *b) { PyLongObject *z; CHECK_BINOP(a, b); /* fast path for single-digit multiplication */ if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) { stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b); #ifdef HAVE_LONG_LONG return PyLong_FromLongLong((PY_LONG_LONG)v); #else /* if we don't have long long then we're almost certainly using 15-bit digits, so v will fit in a long. In the unlikely event that we're using 30-bit digits on a platform without long long, a large v will just cause us to fall through to the general multiplication code below. */ if (v >= LONG_MIN && v <= LONG_MAX) return PyLong_FromLong((long)v); #endif } So when multiplying two integers where each fits in a 30-bit digit, this is done as a direct multiplication by the CPython interpreter, instead of working with the integers as arrays. (MEDIUM_VALUE() called on a positive integer object simply gets its first 30-bit digit.) If the result fits in a single 30-bit digit, PyLong_FromLongLong() will notice this in a relatively small number of operations, and create a single-digit integer object to store it. In contrast, left shifts are not optimized this way, and every left shift deals with the integer being shifted as an array. In particular, if you look at the source code for long_lshift(), in the case of a small but positive left shift, a 2-digit integer object is always created, if only to have its length truncated to 1 later: (my comments in /*** ***/) static PyObject * long_lshift(PyObject *v, PyObject *w) { /*** ... ***/ wordshift = shiftby / PyLong_SHIFT; /*** zero for small w ***/ remshift = shiftby - wordshift * PyLong_SHIFT; /*** w for small w ***/ oldsize = Py_ABS(Py_SIZE(a)); /*** 1 for small v > 0 ***/ newsize = oldsize + wordshift; if (remshift) ++newsize; /*** here newsize becomes at least 2 for w > 0, v > 0 ***/ z = _PyLong_New(newsize); /*** ... ***/ } Integer division You didn't ask about the worse performance of integer floor division compared to right shifts, because that fit your (and my) expectations. But dividing a small positive number by another small positive number is not as optimized as small multiplications, either. 
Every // computes both the quotient and the remainder using the function long_divrem(). This remainder is computed for a small divisor with a multiplication, and is stored in a newly-allocated integer object, which in this situation is immediately discarded.
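If you want to convince yourself that the 30-bit digit limit is what matters, a rough sketch like the following (exact timings will vary by machine and build) compares a single-digit operand with a many-digit one; for the large value both operations take the general multi-digit path, so the multiply/shift gap should mostly disappear:

import timeit

small = 1000        # fits in one 30-bit digit, so the single-digit fast path applies
large = 10 ** 30    # needs several digits, so multiplication takes the general path too

for x in (small, large):
    mul = timeit.timeit('x * 2', globals={'x': x})
    shl = timeit.timeit('x << 1', globals={'x': x})
    print(x, 'multiply:', mul, 'shift:', shl)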
Is slicing really slower in Python 3.4?
This question and my answer got me thinking about this peculiar difference between Python 2.7 and Python 3.4. Take the simple example code: import timeit import dis c = 1000000 r = range(c) def slow(): for pos in range(c): r[pos:pos+3] dis.dis(slow) time = timeit.Timer(lambda: slow()).timeit(number=1) print('%3.3f' % time) In Python 2.7, I consistently get 0.165~ and for Python 3.4 I consistently get 0.554~. The only significant difference between the disassemblies is that Python 2.7 emits the SLICE+3 byte code while Python 3.4 emits BUILD_SLICE followed by BINARY_SUBSCR. Note that I've eliminated the candidates for potential slowdown from the other question, namely strings and the fact that xrange doesn't exist in Python 3.4 (which is supposed to be similar to the latter's range class anyways). Using itertools' islice yields nearly identical timings between the two, so I highly suspect that it's the slicing that's the cause of the difference here. Why is this happening and is there a link to an authoritative source documenting change in behavior? EDIT: In response to the answer, I have wrapped the range objects in list, which did give a noticeable speedup. However as I increased the number of iterations in timeit I noticed that the timing differences became larger and larger. As a sanity check, I replaced the slicing with None to see what would happen. 500 iterations in timeit. c = 1000000 r = list(range(c)) def slow(): for pos in r: None yields 10.688 and 9.915 respectively. Replacing the for loop with for pos in islice(r, 0, c, 3) yields 7.626 and 6.270 respectively. Replacing None with r[pos] yielded 20~ and 28~ respectively. r[pos:pos+3] yields 67.531 and 106.784 respectively. As you can see, the timing differences are huge. Again, I'm still convinced the issue is not directly related to range.
On Python 2.7, you're iterating over a list and slicing a list. On Python 3.4, you're iterating over a range and slicing a range. When I run a test with a list on both Python versions: from __future__ import print_function import timeit print(timeit.timeit('x[5:8]', setup='x = list(range(10))')) I get 0.243554830551 seconds on Python 2.7 and 0.29082867689430714 seconds on Python 3.4, a much smaller difference. The performance difference you see after eliminating the range object is much smaller. It comes primarily from two factors: addition is a bit slower on Python 3, and Python 3 needs to go through __getitem__ with a slice object for slicing, while Python 2 has __getslice__. I wasn't able to replicate the timing difference you saw with r[pos]; you may have had some confounding factor in that test.
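A small sketch to observe the slicing-hook difference mentioned above; run it on both interpreters. On Python 3 a two-index slice goes through __getitem__ with a slice object, while on Python 2 a list subclass inherits list.__getslice__, so the override below is never reached for simple slices:

from __future__ import print_function

class Probe(list):
    def __getitem__(self, item):
        print('__getitem__ called with', item)
        return super(Probe, self).__getitem__(item)

p = Probe(range(10))
p[5:8]   # Python 3 prints: __getitem__ called with slice(5, 8, None); Python 2 stays silent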
How to *actually* read CSV data in TensorFlow?
I'm relatively new to the world of TensorFlow, and pretty perplexed by how you'd actually read CSV data into a usable example/label tensors in TensorFlow. The example from the TensorFlow tutorial on reading CSV data is pretty fragmented and only gets you part of the way to being able to train on CSV data. Here's my code that I've pieced together, based off that CSV tutorial: from __future__ import print_function import tensorflow as tf def file_len(fname): with open(fname) as f: for i, l in enumerate(f): pass return i + 1 filename = "csv_test_data.csv" # setup text reader file_length = file_len(filename) filename_queue = tf.train.string_input_producer([filename]) reader = tf.TextLineReader(skip_header_lines=1) _, csv_row = reader.read(filename_queue) # setup CSV decoding record_defaults = [[0],[0],[0],[0],[0]] col1,col2,col3,col4,col5 = tf.decode_csv(csv_row, record_defaults=record_defaults) # turn features back into a tensor features = tf.pack([col1,col2,col3,col4]) print("loading, " + str(file_length) + " line(s)\n") with tf.Session() as sess: tf.initialize_all_variables().run() # start populating filename queue coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) for i in range(file_length): # retrieve a single instance example, label = sess.run([features, col5]) print(example, label) coord.request_stop() coord.join(threads) print("\ndone loading") And here is an brief example from the CSV file I'm loading - pretty basic data - 4 feature columns, and 1 label column: 0,0,0,0,0 0,15,0,0,0 0,30,0,0,0 0,45,0,0,0 All the code above does is print each example from the CSV file, one by one, which, while nice, is pretty darn useless for training. What I'm struggling with here is how you'd actually turn those individual examples, loaded one-by-one, into a training dataset. For example, here's a notebook I was working on in the Udacity Deep Learning course. I basically want to take the CSV data I'm loading, and plop it into something like train_dataset and train_labels: def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) I've tried using tf.train.shuffle_batch, like this, but it just inexplicably hangs: for i in range(file_length): # retrieve a single instance example, label = sess.run([features, colRelevant]) example_batch, label_batch = tf.train.shuffle_batch([example, label], batch_size=file_length, capacity=file_length, min_after_dequeue=10000) print(example, label) So to sum up, here are my questions: What am I missing about this process? It feels like there is some key intuition that I'm missing about how to properly build an input pipeline. Is there a way to avoid having to know the length of the CSV file? It feels pretty inelegant to have to know the number of lines you want to process (the for i in range(file_length) line of code above) Edit: As soon as Yaroslav pointed out that I was likely mixing up imperative and graph-construction parts here, it started to become clearer. 
I was able to pull together the following code, which I think is closer to what would typically done when training a model from CSV (excluding any model training code): from __future__ import print_function import numpy as np import tensorflow as tf import math as math import argparse parser = argparse.ArgumentParser() parser.add_argument('dataset') args = parser.parse_args() def file_len(fname): with open(fname) as f: for i, l in enumerate(f): pass return i + 1 def read_from_csv(filename_queue): reader = tf.TextLineReader(skip_header_lines=1) _, csv_row = reader.read(filename_queue) record_defaults = [[0],[0],[0],[0],[0]] colHour,colQuarter,colAction,colUser,colLabel = tf.decode_csv(csv_row, record_defaults=record_defaults) features = tf.pack([colHour,colQuarter,colAction,colUser]) label = tf.pack([colLabel]) return features, label def input_pipeline(batch_size, num_epochs=None): filename_queue = tf.train.string_input_producer([args.dataset], num_epochs=num_epochs, shuffle=True) example, label = read_from_csv(filename_queue) min_after_dequeue = 10000 capacity = min_after_dequeue + 3 * batch_size example_batch, label_batch = tf.train.shuffle_batch( [example, label], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue) return example_batch, label_batch file_length = file_len(args.dataset) - 1 examples, labels = input_pipeline(file_length, 1) with tf.Session() as sess: tf.initialize_all_variables().run() # start populating filename queue coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) try: while not coord.should_stop(): example_batch, label_batch = sess.run([examples, labels]) print(example_batch) except tf.errors.OutOfRangeError: print('Done training, epoch reached') finally: coord.request_stop() coord.join(threads)
I think you are mixing up imperative and graph-construction parts here. The operation tf.train.shuffle_batch creates a new queue node, and a single node can be used to process the entire dataset. So I think you are hanging because you created a bunch of shuffle_batch queues in your for loop and didn't start queue runners for them. Normal input pipeline usage looks like this: Add nodes like shuffle_batch to input pipeline (optional, to prevent unintentional graph modification) finalize graph --- end of graph construction, beginning of imperative programming -- tf.start_queue_runners while(True): session.run() To be more scalable (to avoid Python GIL), you could generate all of your data using TensorFlow pipeline. However, if performance is not critical, you can hook up a numpy array to an input pipeline by using slice_input_producer. Here's an example with some Print nodes to see what's going on (messages in Print go to stdout when node is run) tf.reset_default_graph() num_examples = 5 num_features = 2 data = np.reshape(np.arange(num_examples*num_features), (num_examples, num_features)) print data (data_node,) = tf.slice_input_producer([tf.constant(data)], num_epochs=1, shuffle=False) data_node_debug = tf.Print(data_node, [data_node], "Dequeueing from data_node ") data_batch = tf.batch([data_node_debug], batch_size=2) data_batch_debug = tf.Print(data_batch, [data_batch], "Dequeueing from data_batch ") sess = tf.InteractiveSession() sess.run(tf.initialize_all_variables()) tf.get_default_graph().finalize() tf.start_queue_runners() try: while True: print sess.run(data_batch_debug) except tf.errors.OutOfRangeError as e: print "No more inputs." You should see something like this [[0 1] [2 3] [4 5] [6 7] [8 9]] [[0 1] [2 3]] [[4 5] [6 7]] No more inputs. The "8, 9" numbers didn't fill up the full batch, so they didn't get produced. Also tf.Print are printed to sys.stdout, so they show up in separately in Terminal for me. PS: a minimal of connecting batch to a manually initialized queue is in github issue 2193 Also, for debugging purposes you might want to set timeout on your session so that your IPython notebook doesn't hang on empty queue dequeues. I use this helper function for my sessions def create_session(): config = tf.ConfigProto(log_device_placement=True) config.gpu_options.per_process_gpu_memory_fraction=0.3 # don't hog all vRAM config.operation_timeout_in_ms=60000 # terminate on long hangs # create interactive session to register a default session sess = tf.InteractiveSession("", config=config) return sess Scalability Notes: tf.constant inlines copy of your data into the Graph. There's a fundamental limit of 2GB on size of Graph definition so that's an upper limit on size of data You could get around that limit by using v=tf.Variable and saving the data into there by running v.assign_op with a tf.placeholder on right-hand side and feeding numpy array to the placeholder (feed_dict) That still creates two copies of data, so to save memory you could make your own version of slice_input_producer which operates on numpy arrays, and uploads rows one at a time using feed_dict
How to add regularizations in TensorFlow?
I found in many neural network implementations available for TensorFlow that regularization terms are often implemented by manually adding an additional term to the loss value. My questions are: Is there a more elegant or recommended way of regularization than doing it manually? I also find that get_variable has an argument regularizer. How should it be used? According to my observation, if we pass a regularizer to it (such as tf.contrib.layers.l2_regularizer), a tensor representing the regularization term will be computed and added to a graph collection named tf.GraphKeys.REGULARIZATION_LOSSES. Will that collection be automatically used by TensorFlow (e.g. by optimizers when training), or is it expected that I should use that collection myself?
As you say in the second point, using the regularizer argument is the recommended way. You can use it in get_variable, or set it once in your variable_scope and have all your variables regularized. The losses are collected in the graph, and you need to manually add them to your cost function like this. reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) reg_constant = 0.01 # Choose an appropriate one. loss = my_normal_loss + reg_constant * sum(reg_losses) Hope that helps!
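A minimal sketch of the variable_scope variant mentioned above (TF 1.x-era API; the constant standing in for my_normal_loss is just a placeholder for whatever task loss you already have):

import tensorflow as tf

regularizer = tf.contrib.layers.l2_regularizer(scale=0.01)

with tf.variable_scope('layer1', regularizer=regularizer):
    w = tf.get_variable('w', shape=[784, 128])  # picks up the scope's regularizer
    b = tf.get_variable('b', shape=[128])

my_normal_loss = tf.constant(0.0)  # stand-in for your actual task loss
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = my_normal_loss + tf.add_n(reg_losses)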
Custom chained comparisons
Python allows expressions like x > y > z, which, according to the docs, is equivalent to (x > y) and (y > z) except y is only evaluated once. (https://docs.python.org/3/reference/expressions.html) However, this seems to break if I customize comparison functions. E.g. suppose I have the following class: (Apologies for the large block, but once you read the __eq__ method, the rest is trivial.) class CompareList(list): def __repr__(self): return "CompareList([" + ",".join(str(x) for x in self) + "])" def __eq__(self, other): if isinstance(other, list): return CompareList(self[idx] == other[idx] for idx in xrange(len(self))) else: return CompareList(x == other for x in self) def __ne__(self, other): if isinstance(other, list): return CompareList(self[idx] != other[idx] for idx in xrange(len(self))) else: return CompareList(x != other for x in self) def __gt__(self, other): if isinstance(other, list): return CompareList(self[idx] > other[idx] for idx in xrange(len(self))) else: return CompareList(x > other for x in self) def __ge__(self, other): if isinstance(other, list): return CompareList(self[idx] >= other[idx] for idx in xrange(len(self))) else: return CompareList(x >= other for x in self) def __lt__(self, other): if isinstance(other, list): return CompareList(self[idx] < other[idx] for idx in xrange(len(self))) else: return CompareList(x < other for x in self) def __le__(self, other): if isinstance(other, list): return CompareList(self[idx] <= other[idx] for idx in xrange(len(self))) else: return CompareList(x <= other for x in self) Now I can do fun stuff like CompareList([10, 5]) > CompareList([5, 10]) and it will correctly return CompareList([True,False]) However, chaining these operations doesn't work nicely: low = CompareList([1]) high = CompareList([2]) print(low > high > low) # returns CompareList([True]) Why not? What happens under the hood here? I know it isn't equivalent to (low > high) > low = (False > low) (because that would return False). It could be low > (high > low) but that wouldn't make sense in terms of operator precedence (normally left-to-right).
Python allows expressions like x > y > z, which, according to the docs, is equivalent to (x > y) and (y > z) except y is only evaluated once. According to this, low > high > low will be equivalent to (low > high) and (high > low). >>> x = low > high # CompareList([False]) >>> y = high > low # CompareList([True]) >>> x and y CompareList([True]) More from the documentation on x and y: x and y: if x is false, then x, else y In the above case: >>> x is False False >>> x if x is False else y # x and y CompareList([True]) So when you do x and y, note that x is not the boolean False; it is CompareList([False]), a non-empty list and therefore truthy, so the expression returns y, which is CompareList([True]).
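For a quick check of that truthiness reasoning, here is a small sketch (it assumes the CompareList class from the question is already defined):

low = CompareList([1])
high = CompareList([2])
x = low > high             # CompareList([False])
print(bool(x))             # True, because a non-empty list is truthy
print(x and (high > low))  # x is truthy, so "and" returns the right operand: CompareList([True])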
Why is it faster to break rather than to raise an exception?
After checking a few simple tests, it seems as if it might be faster to break from a loop to end a generator rather than to raise a StopIteration exception. Why is this the case if the standard and accepted method of stopping a generator is using the exception? (source) In [1]: def f(): ....: for i in range(1024): ....: yield None ....: break ....: In [2]: def g(): ....: for i in range(1024): ....: yield None ....: raise StopIteration ....: In [3]: %timeit for i in f(): pass 1000000 loops, best of 3: 1.22 µs per loop In [4]: %timeit for i in g(): pass 100000 loops, best of 3: 5.9 µs per loop In [5]: %timeit for i in f(): pass 1000000 loops, best of 3: 1.22 µs per loop In [6]: %timeit for i in g(): pass 100000 loops, best of 3: 5.82 µs per loop
Why is this the case if the standard and accepted method of stopping a generator is using the exception. The exception StopIteration is raised only when the generator has nothing to produce any more. And, it is not a standard way of stopping a generator midway. Here are two statements from the documentation on generators about how to stop them properly: PEP 479 -- Change StopIteration handling inside generators: ... the proposal also clears up the confusion about how to terminate a generator: the proper way is return , not raise StopIteration. PEP 255 -- Simple Generators Q. Why allow "return" at all? Why not force termination to be spelled "raise StopIteration"? A. The mechanics of StopIteration are low-level details, much like the mechanics of IndexError in Python 2.1: the implementation needs to do something well-defined under the covers, and Python exposes these mechanisms for advanced users. That's not an argument for forcing everyone to work at that level, though. "return" means "I'm done" in any kind of function, and that's easy to explain and to use. Note that "return" isn't always equivalent to "raise StopIteration" in try/except construct, either (see the "Specification: Return" section). So the correct way would be to use a return statement instead of using break or raise StopIteration. it seems as if it might be faster to break from a loop to end a generator rather than to raise a StopIteration exception. Indeed it is because when raising the exception there is more job to do. You can use the dis module to get a look at the bytecode: In [37]: dis.dis(f) 2 0 SETUP_LOOP 26 (to 29) 3 LOAD_GLOBAL 0 (range) 6 LOAD_CONST 1 (1024) 9 CALL_FUNCTION 1 12 GET_ITER >> 13 FOR_ITER 12 (to 28) 16 STORE_FAST 0 (i) 3 19 LOAD_CONST 0 (None) 22 YIELD_VALUE 23 POP_TOP 4 24 BREAK_LOOP 25 JUMP_ABSOLUTE 13 >> 28 POP_BLOCK >> 29 LOAD_CONST 0 (None) 32 RETURN_VALUE In [38]: dis.dis(g) 2 0 SETUP_LOOP 31 (to 34) 3 LOAD_GLOBAL 0 (range) 6 LOAD_CONST 1 (1024) 9 CALL_FUNCTION 1 12 GET_ITER >> 13 FOR_ITER 17 (to 33) 16 STORE_FAST 0 (i) 3 19 LOAD_CONST 0 (None) 22 YIELD_VALUE 23 POP_TOP 4 24 LOAD_GLOBAL 2 (StopIteration) 27 RAISE_VARARGS 1 30 JUMP_ABSOLUTE 13 >> 33 POP_BLOCK >> 34 LOAD_CONST 0 (None) 37 RETURN_VALUE You can see that almost everything is same but for raising the exception, it has to execute some extra instructions: 24 LOAD_GLOBAL 2 (StopIteration) 27 RAISE_VARARGS 1
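To make the recommended form concrete, here is a sketch of the same generator ended with return; it should time in the same ballpark as the break version, since no exception has to be raised explicitly by your code:

def h():
    for i in range(1024):
        yield None
        return          # the recommended way to end a generator early

for i in h():           # consumes the single yielded value, then the generator finishes
    pass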
Why does a generator using `()` need a lot of memory?
Problem Let's assume that I want to find n**2 for all numbers smaller than 20000000. General setup for all three variants that I test: import time, psutil, gc gc.collect() mem_before = psutil.virtual_memory()[3] time1 = time.time() # (comprehension, generator, function)-code comes here time2 = time.time() mem_after = psutil.virtual_memory()[3] print "Used Mem = ", (mem_after - mem_before)/(1024**2) # convert Byte to Megabyte print "Calculation time = ", time2 - time1 Three options to calculate these numbers: 1. Creating a list of via comprehension: x = [i**2 for i in range(20000000)] It is really slow and time consuming: Used Mem = 1270 # Megabytes Calculation time = 33.9309999943 # Seconds 2. Creating a generator using '()': x = (i**2 for i in range(20000000)) It is much faster than option 1, but still uses a lot of memory: Used Mem = 611 Calculation time = 0.278000116348 3. Defining a generator function (most efficient): def f(n): i = 0 while i < n: yield i**2 i += 1 x = f(20000000) Its consumption: Used Mem = 0 Calculation time = 0.0 The questions are: What's the difference between the first and second solutions? Using () creates a generator, so why does it need a lot of memory? Is there any built-in function equivalent to my third option?
As others have pointed out in the comments, range creates a list in Python 2. Hence, it is not the generator per se that uses up the memory, but the range that the generator uses: x = (i**2 for i in range(20000000)) # builds a 2*10**7 element list, not for the squares , but for the bases >>> sys.getsizeof(range(100)) 872 >>> sys.getsizeof(xrange(100)) 40 >>> sys.getsizeof(range(1000)) 8720 >>> sys.getsizeof(xrange(1000)) 40 >>> sys.getsizeof(range(20000000)) 160000072 >>> sys.getsizeof(xrange(20000000)) 40 This also explains why your second version (the generator expression) uses around half the memory of the first version (the list comprehension) as the first one builds two lists (for the bases and the squares) while the second only builds one list for the bases. xrange(20000000) thus, greatly improves memory usage as it returns a lazy iterable. This is essentially the built-in memory efficient way to iterate over a range of numbers that mirrors your third version (with the added flexibility of start, stop and step): x = (i**2 for i in xrange(20000000)) In Python 3, range is essentially what xrange used to be in Python 2. However, the Python 3 range object has some nice features that Python 2's xrange doesn't have, like O(1) slicing, contains, etc. Some references: Python2 xrange docs Python3 range docs Stack Overflow - "Should you always favor xrange() over range()?" Martijn Pieters excellent answer to "Why is 1000000000000000 in range(1000000000000001) so fast in Python 3?"
Choice made by Python 3.5 to choose the keys when comparing them in a dictionary
When constructing a dictionary as follows: dict = { True: 'yes', 1: 'No'} When I run it in the interactive Python interpreter the dict is represented this way: dict = {True: 'No'} I understand that the values True and 1 are equal due to type coercion, because when comparing numeric types the narrower type is widened to the other type (boolean is a child of integer). So as I understood from the documentation, when we enter True == 1 Python converts True to 1 and compares them. What I don't understand is why True is selected as a key instead of 1. Am I missing something?
Dictionaries are implemented as hash tables and there are two important concepts when adding keys/values here: hashing and equality. To insert a particular key/value, Python first computes the hash value of the key. This hash value is used to determine the row of the table where Python should first attempt to put the key/value. If the row of the hash table is empty, great: the new key/value can inserted into the dictionary, filling the empty row. However, if there's already something in that row, Python needs to test the keys for equality. If the keys are equal (using ==) then they're deemed to be the same key and Python just needs to update the corresponding value on that row. (If the keys are not equal Python looks at other rows in the table until it finds the key or reaches an empty row, but that's not relevant for this question.) When you write {True: 'yes', 1: 'No'}, you are telling Python to create a new dictionary and then fill it with two key/value pairs. These are processed left to right: True: 'yes' then 1: 'No'. We have hash(True) equals 1. The key True goes in at row 1 in the hash table and the string 'yes' is its value. For the next pair, Python sees that hash(1) is also 1 and so looks at row 1 of the table. There's something already there, so now Python checks the keys for equality. We have 1 == True so 1 is deemed to be the same key as True and so its corresponding value is changed to the string 'No'. This results in a dictionary with one entry: {True: 'No'}. If you want to peer at the guts of CPython 3.5 to see what creating a dictionary looks below the surface-Python level, here's more detail. The Python code {True: 'yes', 1: 'No'} is parsed into tokens and given to the compiler. Given the syntax, Python knows that a dictionary must be created using the values inside the braces. Byte code to load the four values onto the virtual machine's stack (LOAD_CONST) and then build the dictionary (BUILD_MAP) is queued up. The four constant values are pushed onto the top of the stack in the order that they're seen: 'No' 1 'yes' True The opcode BUILD_MAP is then called with the argument 2 (Python counted two key/value pairs). This opcode is responsible for actually creating dictionary from the items on the stack. It looks like this: TARGET(BUILD_MAP) { int i; PyObject *map = _PyDict_NewPresized((Py_ssize_t)oparg); if (map == NULL) goto error; for (i = oparg; i > 0; i--) { int err; PyObject *key = PEEK(2*i); PyObject *value = PEEK(2*i - 1); err = PyDict_SetItem(map, key, value); if (err != 0) { Py_DECREF(map); goto error; } } while (oparg--) { Py_DECREF(POP()); Py_DECREF(POP()); } PUSH(map); DISPATCH(); } The three key steps here are as follows: An empty hashtable is created using _PyDict_NewPresized. Small dictionaries (of just a few items, like 2 in this case) need a table with eight rows. The for loop is entered, starting at 2 (in this case) and counting down to 0. PEEK(n) is a macro that points to the nth item down the stack. Therefore on the first iteration of the loop, we'll have PyObject *key = PEEK(2*2); /* item 4 down the stack */ PyObject *value = PEEK(2*2 - 1); /* item 3 down the stack */ This means that *key will be True and *value will be 'yes' on the first loop through. On the second it will be 1 and 'No'. PyDict_SetItem is called in each loop to put the current *key and *value into the dictionary. This is the same function that is called when you write dictionary[key] = value. 
It computes the hash of the key to work out where to look first in the hash table and then, if needed, compares the key to any existing key on that row (as discussed above).
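A quick interactive check of the two steps described above (equal hashes, then key equality deciding it is the same slot):

>>> hash(True), hash(1)
(1, 1)
>>> True == 1
True
>>> d = {}
>>> d[True] = 'yes'
>>> d[1] = 'No'     # same hash and equal keys, so this updates the existing entry
>>> d
{True: 'No'}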
Why does date + timedelta become date, not datetime?
In Python, in an operation of numbers of mixed type, the narrower type is widened to that of the other, such as int + float → float: In [57]: 3 + 0.1 Out[57]: 3.1 But for datetime.date, we have datetime.date + datetime.timedelta → datetime.date, not datetime.datetime: In [58]: datetime.date(2013, 1, 1) + datetime.timedelta(seconds=42) Out[58]: datetime.date(2013, 1, 1) Why is the widening reasoning applied to numbers, but not to date/datetime/timedelta? (Background: I'm writing a reading routine for a file format where one field is year, one field is day-of-year, one field is milliseconds-since-midnight. Of course, the simple and explicit solution is datetime.datetime(2013, 1, 1, 0, 0, 0) + datetime.timedelta(seconds=42), but one could equally reason that one should rewrite 3 + 0.1 as 3.0 + 0.1)
The behaviour is documented: date2 is moved forward in time if timedelta.days > 0, or backward if timedelta.days < 0. Afterward date2 - date1 == timedelta.days. timedelta.seconds and timedelta.microseconds are ignored. (My emphasis. This behaviour has remained unchanged since date objects were added in Python 2.3.) I haven't been able to find any evidence as to why the module is designed like this. Certainly there are use cases like yours where you want to represent the point in time corresponding to the midnight at the start of a day. In these cases it is annoying to have to convert back and forth. But there are other use cases in which you want to represent a whole day (and not just some point in time on that day), in which case you don't want to accidentally end up with partial days when you add timedeltas. Chris Withers suggested that the behaviour be changed, in issue 3249, but Tim Peters noted that: an incompatible change to documented always-worked-this-way behavior is unlikely to be accepted. If you want an object that behaves like a datetime.date, but where arithmetic operations return datetime.datetime objects, then it shouldn't be too hard to write one: from datetime import date, datetime, time, timedelta def _part_day(t): """Return True if t is a timedelta object that does not consist of whole days. """ return isinstance(t, timedelta) and (t.seconds or t.microseconds) class mydate(date): """Subclass of datetime.date where arithmetic operations with a timedelta object return a datetime.datetime object unless the timedelta object consists of whole days. """ def datetime(self): """Return datetime corresponding to the midnight at start of this date. """ return datetime.combine(self, time()) def __add__(self, other): if _part_day(other): return self.datetime() + other else: return super().__add__(other) __radd__ = __add__ def __sub__(self, other): if _part_day(other): return self.datetime() - other else: return super().__sub__(other) (This is untested, but it shouldn't be hard to get it working from here.)
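For completeness, a brief usage sketch of the class above; since the class is untested, treat the comments as the intended behaviour rather than verified output:

d = mydate(2013, 1, 1)
print(d + timedelta(seconds=42))   # partial day: should give datetime(2013, 1, 1, 0, 0, 42)
print(d + timedelta(days=1))       # whole days: falls back to plain date arithmetic, no datetime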
Recursively search for parent child combinations and build tree in python and XML
I am trying to traverse this XML data full of parent->child relationships and need a way to build a tree. Any help will be really appreciated. Also, in this case, is it better to have attributes or nodes for the parent-->child relationship? <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <nodes> <node name="Car" child="Engine"/> <node name="Car" child="Wheel"/> <node name="Engine" child="Piston"/> <node name="Engine" child="Carb"/> <node name="Carb" child="Bolt"/> <node name="Spare Wheel"/> <node name="Bolt" child="Thread"/> <node name="Carb" child="Foat"/> <node name="Truck" child="Engine"/> <node name="Engine" child="Bolt"/> <node name="Wheel" child="Hubcap"/> </nodes> Here is the Python script I have so far. My brain is fried and I cannot get the logic going, so please help. import xml.etree.ElementTree as ET tree = ET.parse('rec.xml') root = tree.getroot() def find_node(data,search): #str = root.find('.//node[@child="1.2.1"]') for node in data.findall('.//node'): if node.attrib['name']==search: print('Child-->', node) for nodes in root.findall('node'): parent = nodes.attrib.get('name') child = nodes.attrib.get('child') print (parent,'-->', child) find_node(root,child) A possible output that is expected is something like this (I really don't care about the sorting order, as long as all node items are represented somewhere in the tree): Car --> Engine --> Piston Car --> Engine --> Carb --> Float Car --> Engine --> Carb --> Bolt --> Thread Car --> Wheel --> Hubcaps Truck --> Engine --> Piston Truck --> Engine --> Carb --> Bolt --> Thread Truck --> Loading Bin Spare Wheel -->
It has been a long time since I did anything with graphs but this should be pretty close it not the most optimal approach: x = """<?xml version="1.0"?> <nodes> <node name="Car" child="Engine"></node> <node name="Engine" child="Piston"></node> <node name="Engine" child="Carb"></node> <node name="Car" child="Wheel"></node> <node name="Wheel" child="Hubcaps"></node> <node name="Truck" child="Engine"></node> <node name="Truck" child="Loading Bin"></node> <nested> <node name="Spare Wheel" child="Engine"></node> </nested> <node name="Spare Wheel" child=""></node> </nodes>""" from lxml import etree xml = etree.fromstring(x) graph = {} nodes = set() for x in xml.xpath("//node"): par, child = x.xpath(".//@name")[0], x.xpath(".//@child")[0] graph.setdefault(par, set()) graph[par].add(child) nodes.update([child, par]) def find_all_paths(graph, start, end, path=None): if path is None: path = [] path = path + [start] if start == end: yield path for node in graph.get(start, []): if node not in path: for new_path in find_all_paths(graph, node, end, path): yield new_path for n in graph: for e in nodes: if n != e: for path in find_all_paths(graph, n, e): if path: print("--> ".join(path)) Which with the updated input would give you: Engine--> Carb Engine--> Piston Car--> Engine Car--> Wheel Car--> Wheel--> Hubcaps Car--> Engine--> Carb Car--> Engine--> Piston Spare Wheel--> Engine Spare Wheel--> Spare Wheel--> Engine--> Carb Spare Wheel--> Engine--> Piston Wheel--> Hubcaps Truck--> Engine Truck--> Engine--> Carb Truck--> Engine--> Piston Truck--> Loading Bin
Optimization of arithmetic expressions - what is this technique called?
A discussion with a friend led to the following realization: >>> import dis >>> i = lambda n: n*24*60*60 >>> dis.dis(i) 1 0 LOAD_FAST 0 (n) 3 LOAD_CONST 1 (24) 6 BINARY_MULTIPLY 7 LOAD_CONST 2 (60) 10 BINARY_MULTIPLY 11 LOAD_CONST 2 (60) 14 BINARY_MULTIPLY 15 RETURN_VALUE >>> k = lambda n: 24*60*60*n >>> dis.dis(k) 1 0 LOAD_CONST 4 (86400) 3 LOAD_FAST 0 (n) 6 BINARY_MULTIPLY 7 RETURN_VALUE The second example is clearly more efficient simply by reducing the number of instructions. My question is, is there a name for this optimization, and why doesn't it happen in the first example? Also, I'm not sure if this is a duplicate of Why doesn't GCC optimize a*a*a*a*a*a to (a*a*a)*(a*a*a)? ; if it is please explain a bit further as it applies to Python.
This optimization technique is called constant folding. The reason for constant folding occurring in the latter code but not in the former is that Python has dynamic typing, and while in mathematics a product of real numbers is commutative and freely associative, it is not so in Python in the general case, because neither do all variables contain real numbers, nor can one know the types beforehand. Multiplication in Python is left-associative - 24 * 60 * 60 * n behaves like (((24 * 60) * 60) * n), which in turn implicitly executes like (24).__mul__(60).__mul__(60).__mul__(n) or (n).__rmul__((24).__mul__(60).__mul__(60)) whereas n * 24 * 60 * 60 which is (((n * 24) * 60) * 60) can behave like n.__mul__(24).__mul__(60).__mul__(60) or (24).__rmul__(n).__mul__(60).__mul__(60) Since we cannot know the behaviour of n.__mul__ beforehand, we cannot fold the constants in n * 24 * 60 * 60. Consider this example of a funny subclass of int that defines __mul__/__rmul__ as returning the sum of the operands instead of the product: class MultiplyAsAdd(int): def __mul__(self, other): return MultiplyAsAdd(self + other) def __rmul__(self, other): return MultiplyAsAdd(other + self) Then >>> (lambda n: 24*60*60*n)(MultiplyAsAdd(5)) 86405 >>> (lambda n: n*24*60*60)(MultiplyAsAdd(5)) 149 Clearly it'd be wrong for Python to parenthesize the product as n*(24*60*60) in the latter case.
Using Deep Learning to Predict Subsequence from Sequence
I have a data that looks like this: It can be viewed here and has been included in the code below. In actuality I have ~7000 samples (row), downloadable too. The task is given antigen, predict the corresponding epitope. So epitope is always an exact substring of antigen. This is equivalent with the Sequence to Sequence Learning. Here is my code running on Recurrent Neural Network under Keras. It was modeled according the example. My question are: Can RNN, LSTM or GRU used to predict subsequence as posed above? How can I improve the accuracy of my code? How can I modify my code so that it can run faster? Here is my running code which gave very bad accuracy score. #!/usr/bin/env python # -*- coding: utf-8 -*- from __future__ import print_function import sys import json import pandas as pd from keras.models import Sequential from keras.engine.training import slice_X from keras.layers.core import Activation, RepeatVector, Dense from keras.layers import recurrent, TimeDistributed import numpy as np from six.moves import range class CharacterTable(object): ''' Given a set of characters: + Encode them to a one hot integer representation + Decode the one hot integer representation to their character output + Decode a vector of probabilties to their character output ''' def __init__(self, chars, maxlen): self.chars = sorted(set(chars)) self.char_indices = dict((c, i) for i, c in enumerate(self.chars)) self.indices_char = dict((i, c) for i, c in enumerate(self.chars)) self.maxlen = maxlen def encode(self, C, maxlen=None): maxlen = maxlen if maxlen else self.maxlen X = np.zeros((maxlen, len(self.chars))) for i, c in enumerate(C): X[i, self.char_indices[c]] = 1 return X def decode(self, X, calc_argmax=True): if calc_argmax: X = X.argmax(axis=-1) return ''.join(self.indices_char[x] for x in X) class colors: ok = '\033[92m' fail = '\033[91m' close = '\033[0m' INVERT = True HIDDEN_SIZE = 128 BATCH_SIZE = 64 LAYERS = 3 # Try replacing GRU, or SimpleRNN RNN = recurrent.LSTM def main(): """ Epitope_core = answers Antigen = questions """ epi_antigen_df = pd.io.parsers.read_table("http://dpaste.com/2PZ9WH6.txt") antigens = epi_antigen_df["Antigen"].tolist() epitopes = epi_antigen_df["Epitope Core"].tolist() if INVERT: antigens = [ x[::-1] for x in antigens] allchars = "".join(antigens+epitopes) allchars = list(set(allchars)) aa_chars = "".join(allchars) sys.stderr.write(aa_chars + "\n") max_antigen_len = len(max(antigens, key=len)) max_epitope_len = len(max(epitopes, key=len)) X = np.zeros((len(antigens),max_antigen_len, len(aa_chars)),dtype=np.bool) y = np.zeros((len(epitopes),max_epitope_len, len(aa_chars)),dtype=np.bool) ctable = CharacterTable(aa_chars, max_antigen_len) sys.stderr.write("Begin vectorization\n") for i, antigen in enumerate(antigens): X[i] = ctable.encode(antigen, maxlen=max_antigen_len) for i, epitope in enumerate(epitopes): y[i] = ctable.encode(epitope, maxlen=max_epitope_len) # Shuffle (X, y) in unison as the later parts of X will almost all be larger digits indices = np.arange(len(y)) np.random.shuffle(indices) X = X[indices] y = y[indices] # Explicitly set apart 10% for validation data that we never train over split_at = len(X) - len(X) / 10 (X_train, X_val) = (slice_X(X, 0, split_at), slice_X(X, split_at)) (y_train, y_val) = (y[:split_at], y[split_at:]) sys.stderr.write("Build model\n") model = Sequential() # "Encode" the input sequence using an RNN, producing an output of HIDDEN_SIZE # note: in a situation where your input sequences have a variable length, # use input_shape=(None, 
nb_feature). model.add(RNN(HIDDEN_SIZE, input_shape=(max_antigen_len, len(aa_chars)))) # For the decoder's input, we repeat the encoded input for each time step model.add(RepeatVector(max_epitope_len)) # The decoder RNN could be multiple layers stacked or a single layer for _ in range(LAYERS): model.add(RNN(HIDDEN_SIZE, return_sequences=True)) # For each of step of the output sequence, decide which character should be chosen model.add(TimeDistributed(Dense(len(aa_chars)))) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Train the model each generation and show predictions against the validation dataset for iteration in range(1, 200): print() print('-' * 50) print('Iteration', iteration) model.fit(X_train, y_train, batch_size=BATCH_SIZE, nb_epoch=5, validation_data=(X_val, y_val)) ### # Select 10 samples from the validation set at random so we can visualize errors for i in range(10): ind = np.random.randint(0, len(X_val)) rowX, rowy = X_val[np.array([ind])], y_val[np.array([ind])] preds = model.predict_classes(rowX, verbose=0) q = ctable.decode(rowX[0]) correct = ctable.decode(rowy[0]) guess = ctable.decode(preds[0], calc_argmax=False) # print('Q', q[::-1] if INVERT else q) print('T', correct) print(colors.ok + '☑' + colors.close if correct == guess else colors.fail + '☒' + colors.close, guess) print('---') if __name__ == '__main__': main()
Can RNN, LSTM or GRU used to predict subsequence as posed above? Yes, you can use any of these. LSTMs and GRUs are types of RNNs; if by RNN you mean a fully-connected RNN, these have fallen out of favor because of the vanishing gradients problem (1, 2). Because of the relatively small number of examples in your dataset, a GRU might be preferable to an LSTM due to its simpler architecture. How can I improve the accuracy of my code? You mentioned that training and validation error are both bad. In general, this could be due to one of several factors: The learning rate is too low (not an issue since you're using Adam, a per-parameter adaptive learning rate algorithm) The model is too simple for the data (not at all the issue, since you have a very complex model and a small dataset) You have vanishing gradients (probably the issue since you have a 3-layer RNN). Try reducing the number of layers to 1 (in general, it's good to start by getting a simple model working and then increase the complexity), and also consider hyperparameter search (e.g. a 128-dimensional hidden state may be too large - try 30?). Another option, since your epitope is a substring of your input, is to predict the start and end indices of the epitope within the antigen sequence (potentially normalized by the length of the antigen sequence) instead of predicting the substring one character at a time. This would be a regression problem with two tasks. For instance, if the antigen is FSKIAGLTVT (10 letters long) and its epitope is KIAGL (positions 3 to 7, one-based) then the input would be FSKIAGLTVT and the outputs would be 0.3 (first task) and 0.7 (second task). Alternatively, if you can make all the antigens be the same length (by removing parts of your dataset with short antigens and/or chopping off the ends of long antigens assuming you know a priori that the epitope is not near the ends), you can frame it as a classification problem with two tasks (start and end) and sequence-length classes, where you're trying to assign a probability to the antigen starting and ending at each of the positions. How can I modify my code so that it can run faster? Reducing the number of layers will speed your code up significantly. Also, GRUs will be faster than LSTMs due to their simpler architecture. However, both types of recurrent networks will be slower than, e.g. convolutional networks. Feel free to send me an email (address in my profile) if you're interested in a collaboration.
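As a small illustration of the start/end-index encoding suggested above (just the target construction, not the model), using the example antigen and epitope:

antigen = "FSKIAGLTVT"                      # length 10
epitope = "KIAGL"
start = antigen.index(epitope) + 1          # 3, one-based as in the example above
end = start + len(epitope) - 1              # 7
targets = (start / float(len(antigen)), end / float(len(antigen)))
print(targets)                              # (0.3, 0.7) -- the two regression targets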
How to have list() consume __iter__ without calling __len__?
I have a class with both an __iter__ and a __len__ methods. The latter uses the former to count all elements. It works like the following: class A: def __iter__(self): print("iter") for _ in range(5): yield "something" def __len__(self): print("len") n = 0 for _ in self: n += 1 return n Now if we take e.g. the length of an instance it prints len and iter, as expected: >>> len(A()) len iter 5 But if we call list() it calls both __iter__ and __len__: >>> list(A()) len iter iter ['something', 'something', 'something', 'something', 'something'] It works as expected if we make a generator expression: >>> list(x for x in A()) iter ['something', 'something', 'something', 'something', 'something'] I would assume list(A()) and list(x for x in A()) to work the same but they don’t. Note that it appears to first call __iter__, then __len__, then loop over the iterator: class B: def __iter__(self): print("iter") def gen(): print("gen") yield "something" return gen() def __len__(self): print("len") return 1 print(list(B())) Output: iter len gen ['something'] How can I get list() not to call __len__ so that my instance’s iterator is not consumed twice? I could define e.g. a length or size method and one would then call A().size() but that’s less pythonic. I tried to compute the length in __iter__ and cache it so that subsequent calls to __len__ don’t need to iter again but list() calls __len__ without starting to iterate so it doesn’t work. Note that in my case I work on very large data collections so caching all items is not an option.
It's a safe bet that the list() constructor is detecting that len() is available and calling it in order to pre-allocate storage for the list. Your implementation is pretty much completely backwards. You are implementing __len__() by using __iter__(), which is not what Python expects. The expectation is that len() is a fast, efficient way to determine the length in advance. I don't think you can convince list(A()) not to call len. As you have already observed, you can create an intermediate step that prevents len from being called. You should definitely cache the result, if the sequence is immutable. If there are as many items as you speculate, there's no sense computing len more than once.
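One minimal way to do that intermediate step, without writing a generator expression each time, is to hand list() the iterator rather than the object itself; the generator returned by __iter__ has no __len__, so list() cannot pre-ask for the length (a sketch using the A class from the question):

a = A()
items = list(iter(a))   # prints "iter" once, never "len"
print(items)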
In Python: How to remove an object from a list if it is only referenced in that list?
I want to keep track of objects of a certain type that are currently in use. For example: Keep track of all instances of a class or all classes that have been created by a metaclass. It is easy to keep track of instances like this: class A(): instances = [] def __init__(self): self.instances.append(self) But if an instance is not referenced anywhere outside of that list it will not be needed anymore and I do not want to process that instance in a potentially time consuming loop. I tried to remove objects that are only referenced in the list using sys.getrefcount. for i in A.instances: if sys.getrefcount(i) <=3: # in the list, in the loop and in getrefcount # collect and remove after the loop The problem I have is that the reference count is very obscure. Opening a new shell and creating a dummy class with no content returns 5 for sys.getrefcount(DummyClass) Another idea is to copy the objects then deleting the list and checking which objects have been scheduled for garbage collecting and in the last step removing those objects. Something like: Copy = copy(A.instances) del A.instances A.instances = [i for i in Copy if not copy_of_i_is_in_GC(i)] The objects don't have to be removed immediately when the reference count goes to 0. I just don't want to waste too much ressources on objects that are not used anymore.
This answer is the same as Kevin's but I was working up an example implementation with weak references and am posting it here. Using weak references solves the problem where an object is referenced by the self.instance list, so it will never be deleted. One of the things about creating a weak reference for an object is that you can include a callback when the object is deleted. There are issues such as the callback not happening when the program exits... but that may be what you want anyway. import threading import weakref class A(object): instances = [] lock = threading.RLock() @classmethod def _cleanup_ref(cls, ref): print('cleanup') # debug with cls.lock: try: cls.instances.remove(ref) except ValueError: pass def __init__(self): with self.lock: self.instances.append(weakref.ref(self, self._cleanup_ref)) # test test = [A() for _ in range(3)] for i in range(3,-1,-1): assert len(A.instances) == i if test: test.pop() print("see if 3 are removed at exit") test = [A() for _ in range(3)]
ImportError: cannot import name NUMPY_MKL
I am trying to run the following simple code import scipy scipy.test() But I am getting the following error Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 586, in runfile execfile(filename, namespace) File "C:/Users/Mustafa/Documents/My Python Code/SpectralGraphAnalysis/main.py", line 8, in <module> import scipy File "C:\Python27\lib\site-packages\scipy\__init__.py", line 61, in <module> from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl ImportError: cannot import name NUMPY_MKL I am using python 2.7 under windows 10. I have installed scipy but that does not seem to solve the problem Any help is appreciated.
If you look at the line which is causing the error, you'll see this: from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl This line comment states the dependency as numpy+mkl (numpy with the Intel Math Kernel Library). This means that you've installed numpy with pip, but scipy was installed from a precompiled archive, which expects numpy+mkl. This problem can be easily solved by installing numpy+mkl from the whl file available here.
"Fire and forget" python async/await
Sometimes there is some non-critical asynchronous operation that needs to happen but I don't want to wait for it to complete. In Tornado's coroutine implementation you can "fire & forget" an asynchronous function by simply omitting the yield keyword. I've been trying to figure out how to "fire & forget" with the new async/await syntax released in Python 3.5. E.g., a simplified code snippet: async def async_foo(): print("Do some stuff asynchronously here...") def bar(): async_foo() # fire and forget "async_foo()" bar() What happens though is that the body of async_foo() never executes and instead we get a runtime warning: RuntimeWarning: coroutine 'async_foo' was never awaited async_foo() # fire and forget "async_foo()"
asyncio.Task to “fire and forget” asyncio.Task is a way to start some coroutine to executing "in background". Task created by asyncio.ensure_future function wouldn't block execution (function always return immediately). Looks like a way to “fire and forget” you search. import asyncio async def async_foo(): print("async_foo started") await asyncio.sleep(1) print("async_foo done") async def main(): asyncio.ensure_future(async_foo()) # fire and forget async_foo() # btw, you can also create tasks inside non-async funcs print('Do some actions 1') await asyncio.sleep(1) print('Do some actions 2') await asyncio.sleep(1) print('Do some actions 3') if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(main()) Output: Do some actions 1 async_foo started Do some actions 2 async_foo done Do some actions 3 What if tasks are executing after event loop complete? Note that asyncio expects task would be completed at the moment event loop completed. So if you'll change main() to: async def main(): asyncio.ensure_future(async_foo()) # fire and forget print('Do some actions 1') await asyncio.sleep(0.1) print('Do some actions 2') you'll get warning, after program finished: Task was destroyed but it is pending! task: <Task pending coro=<async_foo() running at [...] To prevent that you can just await all pending tasks after event loop completed: async def main(): asyncio.ensure_future(async_foo()) # fire and forget print('Do some actions 1') await asyncio.sleep(0.1) print('Do some actions 2') if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(main()) # Let's also finish all running tasks: pending = asyncio.Task.all_tasks() loop.run_until_complete(asyncio.gather(*pending)) Kill tasks instead of awaiting them Sometimes you don't want to await tasks done (for example, some tasks may be created to run forever). In that case, you can just cancel() them instead of awaiting: import asyncio from contextlib import suppress async def echo_forever(): while True: print("echo") await asyncio.sleep(1) async def main(): asyncio.ensure_future(echo_forever()) # fire and forget print('Do some actions 1') await asyncio.sleep(1) print('Do some actions 2') await asyncio.sleep(1) print('Do some actions 3') if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(main()) # Let's also cancel all running tasks: pending = asyncio.Task.all_tasks() for task in pending: task.cancel() # Now we should await task to execute it's cancellation. # Cancelled task raises asyncio.CancelledError that we can suppress: with suppress(asyncio.CancelledError): loop.run_until_complete(task) Output: Do some actions 1 echo Do some actions 2 echo Do some actions 3 echo
ScrapyRT vs Scrapyd
We've been using Scrapyd service for a while up until now. It provides a nice wrapper around a scrapy project and its spiders letting to control the spiders via an HTTP API: Scrapyd is a service for running Scrapy spiders. It allows you to deploy your Scrapy projects and control their spiders using a HTTP JSON API. But, recently, I've noticed another "fresh" package - ScrapyRT that, according to the project description, sounds very promising and similar to Scrapyd: HTTP server which provides API for scheduling Scrapy spiders and making requests with spiders. Is this package an alternative to Scrapyd? If yes, what is the difference between the two?
They don't have thaaat much in common. As you have already seen you have to deploy your spiders to scrapyd and then schedule crawls. scrapyd is a standalone service running on a server where you can deploy and run every project/spider you like. With ScrapyRT you choose one of your projects and you cd to that directory. Then you run e.g. scrapyrt and you start crawls for spiders on that project through a simple (and very similar to scrapyd's) REST API. Then you get crawled items back as part of the JSON response. It's a very nice idea and it looks fast, lean and well defined. Scrapyd on the other hand is more mature and more generic. Here are some key differences: Scrapyd supports multiple versions of spiders and multiple projects. As far as I can see if you want to run two different projects (or versions) with ScrapyRT you will have to use different ports for each. Scrapyd provides infrastructure for keeping items in the server while ScrapyRT sends them back to you on the response which, for me, means that they should be in the order of a few MBs (instead of potentially GBs.) Similarly, the way logging is handled in scrapyd is more generic when compared to ScrapyRT. Scrapyd (potentially persistently) queues jobs and gives you control over the number of Scrapy processes that run in parallel. ScrapyRT does something simple which as far as I can tell is to start a crawl for every request as soon as the request arrives. Blocking code in one of the spiders will block others as well. ScrapyRT requires an url argument which as far as I can tell overrides any start_urls-related logic. I would say that ScrapyRT and Scrapyd very cleverly don't overlap at this point in time. Of course you never know what future holds.
Attributes of Python module `this`
Typing import this returns Tim Peters' Zen of Python. But I noticed that there are 4 properties on the module: this.i this.c this.d this.s I can see that the statement print(''.join(this.d.get(el, el) for el in this.s)) uses this.d to decode this.s to print the Zen. But can someone tell me what the attributes this.i and this.c are for? I assume they are there intentionally - answers to this question seem to suggest there are other jokes to be gleaned from the wording of the Zen. I'm wondering if there is a reference I'm missing with these 2 values. I noticed that the values differ between Python versions: # In v3.5: this.c Out[2]: 97 this.i Out[3]: 25 # In v2.6 this.c Out[8]: '!' this.i Out[9]: 25
i and c are simply loop variables, used to build the d dictionary. From the module source code: d = {} for c in (65, 97): for i in range(26): d[chr(i+c)] = chr((i+13) % 26 + c) This builds a ROT-13 mapping; each ASCII letter (codepoints 65 through 90 for uppercase, 97 through 122 for lowercase) is mapped to another ASCII letter 13 spots along the alphabet (looping back to A and onwards). So A (ASCII point 65) is mapped to N and vice versa (as well as a mapped to n): >>> c, i = 65, 0 >>> chr(i + c) 'A' >>> chr((i + 13) % 26 + c) 'N' Note that if you wanted to ROT-13 text yourself, there is a simpler method; just encode or decode with the rot13 codec: >>> this.s "Gur Mra bs Clguba, ol Gvz Crgref\n\nOrnhgvshy vf orggre guna htyl.\nRkcyvpvg vf orggre guna vzcyvpvg.\nFvzcyr vf orggre guna pbzcyrk.\nPbzcyrk vf orggre guna pbzcyvpngrq.\nSyng vf orggre guna arfgrq.\nFcnefr vf orggre guna qrafr.\nErnqnovyvgl pbhagf.\nFcrpvny pnfrf nera'g fcrpvny rabhtu gb oernx gur ehyrf.\nNygubhtu cenpgvpnyvgl orngf chevgl.\nReebef fubhyq arire cnff fvyragyl.\nHayrff rkcyvpvgyl fvyraprq.\nVa gur snpr bs nzovthvgl, ershfr gur grzcgngvba gb thrff.\nGurer fubhyq or bar-- naq cersrenoyl bayl bar --boivbhf jnl gb qb vg.\nNygubhtu gung jnl znl abg or boivbhf ng svefg hayrff lbh'er Qhgpu.\nAbj vf orggre guna arire.\nNygubhtu arire vf bsgra orggre guna *evtug* abj.\nVs gur vzcyrzragngvba vf uneq gb rkcynva, vg'f n onq vqrn.\nVs gur vzcyrzragngvba vf rnfl gb rkcynva, vg znl or n tbbq vqrn.\nAnzrfcnprf ner bar ubaxvat terng vqrn -- yrg'f qb zber bs gubfr!" >>> import codecs >>> codecs.decode(this.s, 'rot13') "The Zen of Python, by Tim Peters\n\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\nSpecial cases aren't special enough to break the rules.\nAlthough practicality beats purity.\nErrors should never pass silently.\nUnless explicitly silenced.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nAlthough that way may not be obvious at first unless you're Dutch.\nNow is better than never.\nAlthough never is often better than *right* now.\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\nNamespaces are one honking great idea -- let's do more of those!" As for the difference in Python 2.6 (or Python 2.7 for that matter) versus Python 3.5; the same variable name c is also used in the list comprehension in the str.join() call: print "".join([d.get(c, c) for c in s]) In Python 2, list comprehensions do not get their own scope (unlike generator expressions and dict and set comprehensions). In Python 3 they do, and the c value in the list comprehension is no longer part of the module namespace. So the last value assigned to c at the module scope is 97 in Python 3, and this.s[-1] (so a '!') in Python 2. See Why do list comprehensions write to the loop variable, but generators don't? There is no joke embedded in these 1-letter variable names. There are jokes in the Zen itself. Like the fact that between the source code for the this module and the text itself you can find violations for just about all the rules!
What's the meaning of "(1,) == 1," in Python?
I'm testing the tuple structure, and I found it strange when I use the == operator like: >>> (1,) == 1, Out: (False,) When I assign these two expressions to variables, the result is true: >>> a = (1,) >>> b = 1, >>> a==b Out: True This question is different from Python tuple trailing comma syntax rule in my view. I am asking about how the expressions around the == operator are grouped.
This is just operator precedence. Your first (1,) == 1, groups like so: ((1,) == 1), and so builds a tuple whose single element is the result of comparing the one-element tuple (1,) to the integer 1 for equality. They're not equal, so you get the 1-tuple (False,) as a result.
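To see the grouping at the interactive prompt (purely illustrative):
>>> ((1,) == 1)      # the comparison itself evaluates to False
False
>>> ((1,) == 1),     # the trailing comma then wraps that result in a tuple
(False,)
>>> (1,) == (1,)     # tuple compared to tuple, as in the second snippet
True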
Tweaking axis labels and names orientation for 3D plots in matplotlib
I am making this 3D plot using matplotlib: ax.plot_surface(x_surf, y_surf, np.reshape(npp, (max_temp/step, max_temp/step)), linewidth=0.2,cmap=palettable.colorbrewer.sequential.Greens_9.mpl_colormap) How can I make the axis label and axis names look more like this plot:
As far as I understood, you want to change the "axis label" and "axis names". Unfortunately I could only do part of it (I hope that's something new to you and that someone else finds the second part of it): I did some changes in http://matplotlib.org/examples/mplot3d/pathpatch3d_demo.html in order to obtain the images above import matplotlib.pyplot as plt from matplotlib.patches import Circle, PathPatch # register Axes3D class with matplotlib by importing Axes3D from mpl_toolkits.mplot3d import Axes3D import mpl_toolkits.mplot3d.art3d as art3d from matplotlib.text import TextPath from matplotlib.transforms import Affine2D def text3d(ax, xyz, s, zdir="z", size=None, angle=0, usetex=False, **kwargs): x, y, z = xyz if zdir == "y": xy1, z1 = (x, z), y elif zdir == "x": xy1, z1 = (y, z), x else: xy1, z1 = (x, y), z text_path = TextPath((0, 0), s, size=size, usetex=usetex) trans = Affine2D().rotate(angle).translate(xy1[0], xy1[1]) p1 = PathPatch(trans.transform_path(text_path), **kwargs) ax.add_patch(p1) art3d.pathpatch_2d_to_3d(p1, z=z1, zdir=zdir) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.tick_params(axis='x',direction='out', length=6, width=2, colors='r') text3d(ax, (4, -2, 0), "X-axis", zdir="z", size=.5, usetex=False, ec="none", fc="k") text3d(ax, (12, 4, 0), "Y-axis", zdir="z", size=.5, usetex=False, angle=.5*3.14159, ec="none", fc="k") text3d(ax, (12, 10, 4), "Z-axis", zdir="y", size=.5, usetex=False, angle=.5*3.14159, ec="none", fc="k") ax.set_xlim3d(0, 10) ax.set_ylim3d(0, 10) ax.set_zlim3d(0, 10) plt.show() I was expecting to find how to develop the second part of this problem at http://matplotlib.org/api/axes_api.html, but I did not find it yet. Hope this helps
Why are literal formatted strings so slow in Python 3.6 alpha?
I've downloaded a Python 3.6 alpha build from the Python Github repository, and one of my favourite new features is literal string formatting. It can be used like so: >>> x = 2 >>> f"x is {x}" "x is 2" This appears to do the same thing as using the format function on a str instance. However, one thing that I've noticed is that this literal string formatting is actually very slow compared to just calling format. Here's what timeit says about each method: >>> x = 2 >>> timeit.timeit(lambda: f"X is {x}") 0.8658502227130764 >>> timeit.timeit(lambda: "X is {}".format(x)) 0.5500578542015617 If I use a string as timeit's argument, my results are still showing the pattern: >>> timeit.timeit('x = 2; f"X is {x}"') 0.5786435347381484 >>> timeit.timeit('x = 2; "X is {}".format(x)') 0.4145195760771685 As you can see, using format takes almost half the time. I would expect the literal method to be faster because less syntax is involved. What is going on behind the scenes which causes the literal method to be so much slower?
The f"..." syntax is effectively converted to a str.join() operation on the literal string parts around the {...} expressions, and the results of the expressions themselves passed through the object.__format__() method (passing any :.. format specification in). You can see this when disassembling: >>> import dis >>> dis.dis(compile('f"X is {x}"', '', 'exec')) 1 0 LOAD_CONST 0 ('') 3 LOAD_ATTR 0 (join) 6 LOAD_CONST 1 ('X is ') 9 LOAD_NAME 1 (x) 12 FORMAT_VALUE 0 15 BUILD_LIST 2 18 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 21 POP_TOP 22 LOAD_CONST 2 (None) 25 RETURN_VALUE >>> dis.dis(compile('"X is {}".format(x)', '', 'exec')) 1 0 LOAD_CONST 0 ('X is {}') 3 LOAD_ATTR 0 (format) 6 LOAD_NAME 1 (x) 9 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 12 POP_TOP 13 LOAD_CONST 1 (None) 16 RETURN_VALUE Note the BUILD_LIST and LOAD_ATTR .. (join) op-codes in that result. The new FORMAT_VALUE takes the top of the stack plus a format value (parsed out at compile time) to combine these in a object.__format__() call. So your example, f"X is {x}", is translated to: ''.join(["X is ", x.__format__('')]) Note that this requires Python to create a list object, and call the str.join() method. The str.format() call is also a method call, and after parsing there is still a call to x.__format__('') involved, but crucially, there is no list creation involved here. It is this difference that makes the str.format() method faster. Note that Python 3.6 has only been released as an alpha build; this implementation can still easily change. See PEP 494 – Python 3.6 Release Schedule for the time table, as well as Python issue #27078 (opened in response to this question) for a discussion on how to further improve the performance of formatted string literals.
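If you want to convince yourself that the list construction and str.join() call are where the extra time goes, you can time a hand-written equivalent of the translation sketched above (a rough check only; the absolute numbers depend on your machine and on the alpha build):
import timeit

x = 2
# roughly what the f-string compiles to in this alpha build
print(timeit.timeit(lambda: ''.join(["X is ", x.__format__('')])))
print(timeit.timeit(lambda: "X is {}".format(x)))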
Gauss-Legendre over intervals -x -> infinity: adaptive algorithm to transform weights and nodes efficiently
Okay I know this has been asked before with a limited example for scaling [-1, 1] intervals [a, b] Different intervals for Gauss-Legendre quadrature in numpy BUT no one has posted how to generalize this for [-a, Infinity] (as is done below, but not (yet) fast). Also this shows how to call a complex function (in quantitative option pricing anyhow) with several implementations. There is the benchmark quad code, followed by leggauss, with links to code examples on how to implement an adaptive algorithm. I have worked through most of the linked adaptive algorithm difficulties - it currently prints the sum of the divided integral to show it works correctly. Here you will find functions to convert a range from [-1, 1] to [0, 1] to [a, Infinity] (thanks @AlexisClarembeau). To use the adaptive algorithm I had to create another function to convert from [-1, 1] to [a, b] which is fed back into the [a, Infinity] function. import numpy as np from scipy.stats import norm, lognorm from scipy.integrate import quad a = 0 degrees = 50 flag=-1.0000 F = 1.2075 K = 0.1251 vol = 0.43 T2 = 0.0411 T1 = 0.0047 def integrand(x, flag, F, K, vol, T2, T1): d1 = (np.log(x / (x+K)) + 0.5 * (vol**2) * (T2-T1)) / (vol * np.sqrt(T2 - T1)) d2 = d1 - vol*np.sqrt(T2 - T1) mu = np.log(F) - 0.5 *vol **2 * T1 sigma = vol * np.sqrt(T1) return lognorm.pdf(x, mu, sigma) * (flag * x*norm.cdf(flag * d1) - flag * (x+K)*norm.cdf(flag * d2)) def transform_integral_0_1_to_Infinity(x, a): return integrand(a+(x/(1-x)), flag, F, K, vol, T2, T1) *(1/(1-x)**2); def transform_integral_negative1_1_to_0_1(x, a): return 0.5 * transform_integral_0_1_to_Infinity((x+1)/2, a) def transform_integral_negative1_1_to_a_b(x, w, a, b): return np.sum(w*(0.5 * transform_integral_0_1_to_Infinity(((x+1)/2*(b-a)+a), a))) def adaptive_integration(x, w, a=-1, b=1, lastsplit=False, precision=1e-10): #split the integral in half assuming [-1, 1] range midpoint = (a+b)/2 interval1 = transform_integral_negative1_1_to_a_b(x, w, a, midpoint) interval2 = transform_integral_negative1_1_to_a_b(x, w, midpoint, b) return interval1+interval2 #just shows this is correct for splitting the interval def integrate(x, w, a): return np.sum(w*transform_integral_negative1_1_to_0_1(x, a)) x, w = np.polynomial.legendre.leggauss(degrees) quadresult = quad(integrand, a, np.Inf, args=(flag, F, K, vol, T2, T1), epsabs=1e-1000)[0] GL = integrate(x, w, a) print("Adaptive Sum Result:") print(adaptive_integration(x, w)) print("GL result"); print(GL) print("QUAD result") print(quadresult) Still need to increase the speed and accuracy with less dimensions as I can't manually adjust the degrees range for -a to get convergence. To illustrate why this is a problem, put in these values instead: a=-20, F=50, then run. You can increase degrees=1000 and see that there is no benefit to this Gauss-Legendre algorithm if it is not applied intelligently. My requirement for speed is to get to 0.0004s per loop, whereas the last algorithm I Cythonized took about 0.75s, which is why I am trying to use a low degree, high accuracy algorithm with Gauss-Legendre. With Cython and multi-threading this requirement from a completely optimized Python implementation is roughly 0.007s per loop (a non-vectorized, loop ridden, inefficient routine could be 0.1s per loop, with degrees=20, i.e. %timeit adaptive_integration(x,w). 
A possible solution which I've half implemented is here http://online.sfsu.edu/meredith/Numerical_Analysis/improper_integrals on pages 5/6, adaptive integration whereas the interval a-b (in this case, I wrote the transform_integral_negative1_1_to_a_b function) where the interval is divided in 2 (@0.5), the function is then evaluated on these 1/2 intervals, and the sum of the two 0->0.5 + 0.5->1 are compared to the function results for the whole range 0->1. If accuracy is not within tolerance, the range is further subdivided at 0.25 and 0.75, the function is again evaluated for each subinterval, and compared to the prior 1/2 interval sums @0.5. If 1 side is within tolerance (e.g. abs(0->0.5 - (0->0.25 + 0.25->0.5)) < precision), but the other side is not, splitting stops on the side within tolerance, but continues on the other side until precision is reached. At this point the results for each slice of the interval are summed to obtain the full integral with higher accuracy. There are likely faster and better ways of approaching this problem. I don't care as long as it is fast and accurate. Here is the best description of integration routines I've come across for reference http://orion.math.iastate.edu/keinert/computation_notes/chapter5.pdf Award is 100pts bounty + 15pts for answer acceptance. Thank you for assisting in making this code FAST and ACCURATE! EDIT: Here is my change to the adaptive_integration code - if someone can make this work fast I can accept an answer and award bounty. This Mathematica code on page 7 http://online.sfsu.edu/meredith/Numerical_Analysis/improper_integrals does the routine I attempted. It has work on a routine that doesn't converge well, see the variables below. Right now my code errors out: RecursionError: maximum recursion depth exceeded in comparison on some inputs, or if the degrees are set too high, or doesn't get close to the quad result when it does work, so something is apparently wrong here. def adaptive_integration(x, w, a, b, integralA2B, remainingIterations, firstIteration, precision=1e-9): #split the integral in half assuming [-1, 1] range if remainingIterations == 0: print('Adaptive integration failed on the interval',a,'->',b) if np.isnan(integralA2B): return np.nan midpoint = (a+b)/2 interval1 = transform_integral_negative1_1_to_a_b(x, w, a, midpoint) interval2 = transform_integral_negative1_1_to_a_b(x, w, midpoint, b) if np.abs(integralA2B - (interval1 + interval2)) < precision : return(interval1 + interval2) else: return adaptive_integration(x, w, a, midpoint, interval1, (remainingIterations-1), False) + adaptive_integration(x, w, midpoint, b, interval2, (remainingIterations-1), False) #This example doesn't converge to Quad # non-converging interval inputs a = 0 # AND a = -250 degrees = 10 flag= 1 F = 50 K = 0.1251 vol = 0.43 T2 = 0.0411 T1 = 0.0047 print(adaptive_integration(x, w, -1, 1, GL, 500, False)) The output with degrees=100 (after calculating GL with degrees=10000 for a better initial estimate, otherwise, the algorithm always agrees with its own accuracy apparently and doesn't invoke the adaptive path which fails every time): GL result: 60.065205169286379 Adaptive Sum Result: RecursionError: maximum recursion depth exceeded in comparison QUAD result: 68.72069173210338
I think that code does the job: import numpy as np import math deg = 10 x, w = np.polynomial.legendre.leggauss(deg) def function(x): # the function to integrate return math.exp(-x) def function2(x, a): return function(a+x/(1-x))/((1-x)**2); def anotherOne(x, a): return 0.5 * function2(x/2 + 1/2, a) def integrate(deg, a): sum = 0 x, w = np.polynomial.legendre.leggauss(deg) for i in range(deg): print("sum({}) += {} * {} (eval in {})".format(sum, w[i], anotherOne(x[i], a), x[i])) sum += w[i]*anotherOne(x[i], a) return sum; print("result"); print(integrate(10, 1)) It combines your equation to integrate from a to inf and the equation to change the bounds of an integral. I hope it solves your problem (it works for exp(-x) at least) :) Written out, the program evaluates the sum over i of w[i] * anotherOne(x[i], a), which combines the Gauss-Legendre rule with the substitution t = a + x/(1-x) (mapping [0, 1) onto [a, inf), hence the 1/(1-x)**2 Jacobian) and the linear shift from [-1, 1] to [0, 1] (hence the factor 0.5).
In Python Dictionaries, how does ( (j*5)+1 ) % 2**i cycle through all 2**i
I am researching how Python implements dictionaries. One of the equations in the Python dictionary implementation relates to the pseudo-random probing for an empty dictionary slot, using the equation j = ((j*5) + 1) % 2**i which is explained here. I have read this question, How are Python's Built In Dictionaries Implemented, and basically understand how dictionaries are implemented. What I don't understand is why/how the equation: j = ((j*5) + 1) % 2**i cycles through all the remainders of 2**i. For instance, if i = 3, for a total starting size of 8, j goes through the cycle: 0 1 6 7 4 5 2 3 0 If the starting size is 16, it would go through the cycle: 0 1 6 15 12 13 2 11 8 9 14 7 4 5 10 3 0 This is very useful for probing all the slots in the dictionary. But why does it work? Why does j = ((j*5)+1) work, but not j = ((j*6)+1) or j = ((j*3)+1), both of which get stuck in smaller cycles? I am hoping to get a more intuitive understanding of this than "the equation just works and that's why they used it".
This is the same principle that pseudo-random number generators use, as Jasper hinted at, namely linear congruential generators. A linear congruential generator is a sequence that follows the relationship X_(n+1) = (a * X_n + c) mod m. From the wiki page, The period of a general LCG is at most m, and for some choices of factor a much less than that. The LCG will have a full period for all seed values if and only if: m and c are relatively prime. a - 1 is divisible by all prime factors of m. a - 1 is divisible by 4 if m is divisible by 4. It's clear to see that 5 is the smallest a to satisfy these requirements, namely 2^i and 1 are relatively prime. 4 is divisible by 2. 4 is divisible by 4. Also interestingly, 5 is not the only number that satisfies these conditions. 9 will also work. Taking m to be 16, using j=(9*j+1)%16 yields 0 1 10 11 4 5 14 15 8 9 2 3 12 13 6 7 The proof for these three conditions can be found in the original Hull-Dobell paper on page 5, along with a bunch of other PRNG-related theorems that also may be of interest.
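You can check the full-period claim empirically; this small sketch follows j -> (a*j + 1) % 16 from 0 and counts how many distinct slots each multiplier visits before a value repeats:
def cycle_length(a, m=2**4, c=1):
    # follow j -> (a*j + c) % m starting from 0 until a value repeats
    seen = set()
    j = 0
    while j not in seen:
        seen.add(j)
        j = (a * j + c) % m
    return len(seen)

for a in (3, 5, 6, 9):
    print(a, cycle_length(a))  # only a=5 and a=9 visit all 16 slots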
Replacing repeated captures
This is sort of a follow-up to the Python regex - Replace single quotes and brackets thread. The task: Sample input strings: RSQ(name['BAKD DK'], name['A DKJ']) SMT(name['BAKD DK'], name['A DKJ'], name['S QRT']) Desired outputs: XYZ(BAKD DK, A DKJ) XYZ(BAKD DK, A DKJ, S QRT) The number of name['something']-like items is variable. The current solution: Currently, I'm doing it through two separate re.sub() calls: >>> import re >>> >>> s = "RSQ(name['BAKD DK'], name['A DKJ'])" >>> s1 = re.sub(r"^(\w+)", "XYZ", s) >>> re.sub(r"name\['(.*?)'\]", r"\1", s1) 'XYZ(BAKD DK, A DKJ)' The question: Would it be possible to combine these two re.sub() calls into a single one? In other words, I want to replace something at the beginning of the string and then multiple similar things after, all of that in one go. I've looked into the regex module - its ability to capture repeated patterns looks very promising; I tried using its subf() but failed to make it work.
You can indeed use the regex module and repeated captures. The main interest is that you can check the structure of the matched string: import regex regO = regex.compile(r''' \w+ \( (?: name\['([^']*)'] (?: ,[ ] | (?=\)) ) )* \) ''', regex.VERBOSE); regO.sub(lambda m: 'XYZ(' + (', '.join(m.captures(1))) + ')', s) (Note that you can replace "name" by \w+ or anything you want without problems.)
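A quick check of that substitution against the sample inputs from the question (assuming the regO pattern compiled above):
for s in ("RSQ(name['BAKD DK'], name['A DKJ'])",
          "SMT(name['BAKD DK'], name['A DKJ'], name['S QRT'])"):
    print(regO.sub(lambda m: 'XYZ(' + ', '.join(m.captures(1)) + ')', s))
# XYZ(BAKD DK, A DKJ)
# XYZ(BAKD DK, A DKJ, S QRT)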
'super' object has no attribute '__eq__'
When I try to override the magic method __eq__, and use super to access the base method found in object, I get an error. There's no way this is a bug, but it sure feels like one: class A(object): def __eq__(self, other): return super(A, self).__eq__(other) A() == 0 # raises AttributeError: 'super' object has no attribute '__eq__' This is unintuitive because object.__eq__ exists, but for class A(object): pass it doesn't. If I'm not mistaken __eq__ resorts to an is check, so that may be the workaround here, but using is instead of super isn't mixin friendly. Going that route is ok in my case, but in others it might not be. Any suggestions, or info on why __eq__ works this way would be great.
As noted in Will's answer, object() does not actually implement __eq__ at all for instances (in Python 2.7). You are being deceived by the fact that object.__eq__ exists into believing it must be a method that checks whether instances of object are equal. Instead, object.__eq__ is actually a class method, inherited from type, that is used to check if types are equal. That is, to handle expressions such as object == int and object == object.
How can I create a type hint that my returned list contains strings?
I want to use Type Hints in my Python program. How can I create Type Hints for complex data structures, like lists with strings or a generator returning integers? Example def names() -> list: # I would like to specify that the list contains strings? return ['Amelie', 'John', 'Carmen'] def numbers(): # Which type should I specify for `numbers()`? for num in range(100): yield num
Use the typing module; it contains generics, type objects you can use to specify containers with constraints on their contents: import typing def names() -> typing.List[str]: # list object with strings return ['Amelie', 'John', 'Carmen'] def numbers() -> typing.Iterator[int]: # iterator yielding integers for num in range(100): yield num Depending on how you design your code and how you want to use the return value of names(), you could also use the typing.Sequence and typing.MutableSequence types here (see the sketch at the end of this answer), depending on whether or not you expect to be able to mutate the result. A generator is a specific type of iterator, so typing.Iterator is appropriate here. If your generator also accepts send() values and uses return to set a StopIteration value, you can use the typing.Generator object too: def filtered_numbers(filter) -> typing.Generator[int, int, float]: # contrived generator that filters numbers; returns percentage filtered. # first send a limit! matched = 0 limit = yield yield # one more yield to pause after sending for num in range(limit): if filter(num): yield num matched += 1 return (matched / limit) * 100 If you are new to type hinting, then PEP 483 – The Theory of Type Hints may be helpful.
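As a sketch of the read-only variant mentioned above, a Sequence hint lets the implementation return any indexable, iterable container (for example a tuple) without promising mutability:
import typing

def names() -> typing.Sequence[str]:
    # a tuple satisfies Sequence[str]; callers shouldn't expect to mutate it
    return ('Amelie', 'John', 'Carmen')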
Spurious newlines added in Django management commands
Running Django v1.10 on Python 3.5.0: from django.core.management.base import BaseCommand class Command(BaseCommand): def handle(self, *args, **options): print('hello ', end='', file=self.stdout) print('world', file=self.stdout) Expected output: hello world Actual output: hello world How do I correctly pass the ending character? I currently use a workaround of setting explicitly: self.stdout.ending = '' But this hack means you don't get all the features of the print function, you must use self.stdout.write and prepare the bytes manually.
As is mentioned in Django 1.10's Custom Management Commands document: When you are using management commands and wish to provide console output, you should write to self.stdout and self.stderr, instead of printing to stdout and stderr directly. By using these proxies, it becomes much easier to test your custom command. Note also that you don’t need to end messages with a newline character, it will be added automatically, unless you specify the ending parameter: self.stdout.write("Unterminated line", ending='') Hence, in order to print in your Command class, you should define your handle() function as: from django.core.management.base import BaseCommand class Command(BaseCommand): def handle(self, *args, **options): self.stdout.write("hello ", ending='') self.stdout.write("world", ending='') # prints: hello world Also, by explicitly setting self.stdout.ending = '', you are modifying a property of the self.stdout object. But you may not want this to be reflected in future calls of self.stdout.write(). Hence it is better to use the ending parameter within the self.stdout.write() function (as demonstrated in the sample code above). As you mentioned, "But this hack means you don't get all the features of the print function, you must use self.stdout.write and prepare the bytes manually." No, you do not have to worry about creating the bytes or other features of print(), as the self.stdout.write() function, belonging to the OutputWrapper class, expects data to be in str format. I would also like to mention that print() and OutputWrapper.write() behave quite similarly, both acting as a wrapper around sys.stdout.write(). The only difference between print() and OutputWrapper.write() is: print() accepts message strings as *args with a separator parameter to join the multiple strings, whereas OutputWrapper.write() accepts a single message string. But this difference can easily be handled by explicitly joining the strings with a separator and passing the result to OutputWrapper.write(). Conclusion: You do not have to worry about the additional features provided by print() as there are none, and you should go ahead with using self.stdout.write() as suggested in this answer's quoted content from the Custom Management Commands document. If you are interested, you may check the source code of the BaseCommand and OutputWrapper classes available at: Source code for django.core.management.base. It might help in clearing some of your doubts. You may also check PEP-3105 related to Python 3's print().
Why is dict definition faster in Python 2.7 than in Python 3.x?
I have encountered a (not very unusual) situation in which I had to either use a map() or a list comprehension expression. And then I wondered which one is faster. This StackOverflow answer provided me the solution, but then I started to test it myself. Basically the results were the same, but I found an unexpected behavior when switching to Python 3 that I got curious about, and namely: λ iulian-pc ~ → python --version Python 2.7.6 λ iulian-pc ~ → python3 --version Python 3.4.3 λ iulian-pc ~ → python -mtimeit '{}' 10000000 loops, best of 3: 0.0306 usec per loop λ iulian-pc ~ → python3 -mtimeit '{}' 10000000 loops, best of 3: 0.105 usec per loop λ iulian-pc ~ → python -mtimeit 'dict()' 10000000 loops, best of 3: 0.103 usec per loop λ iulian-pc ~ → python3 -mtimeit 'dict()' 10000000 loops, best of 3: 0.165 usec per loop I had the assumption that Python 3 is faster than Python 2, but it turned out in several posts (1, 2) that it's not the case. Then I thought that maybe Python 3.5 will perform better at such a simple task, as they state in their README: The language is mostly the same, but many details, especially how built-in objects like dictionaries and strings work, have changed considerably, and a lot of deprecated features have finally been removed. But nope, it performed even worse: λ iulian-pc ~ → python3 --version Python 3.5.0 λ iulian-pc ~ → python3 -mtimeit '{}' 10000000 loops, best of 3: 0.144 usec per loop λ iulian-pc ~ → python3 -mtimeit 'dict()' 1000000 loops, best of 3: 0.217 usec per loop I've tried to dive into the Python 3.5 source code for dict, but my knowledge of C language is not sufficient to find the answer myself (or, maybe I even don't search in the right place). So, my question is: What makes the newer version of Python slower comparing to an older version of Python on a relatively simple task such as a dict definition, as by the common sense it should be vice-versa? I'm aware of the fact that these differences are so small that in most cases they can be neglected. It was just an observation that made me curious about why the time increased and not remained the same at least?
Because nobody cares The differences you are citing are on the order of tens or hundreds of nanoseconds. A slight difference in how the C compiler optimizes register use could easily cause such changes (as could any number of other C-level optimization differences). That, in turn, could be caused by any number of things, such as changes in the number and usage of local variables in the C implementation of Python (CPython), or even just switching C compilers. The fact is, nobody is actively optimizing for these small differences, so nobody is going to be able to give you a specific answer. CPython is not designed to be fast in an absolute sense. It is designed to be scalable. So, for example, you can shove hundreds or thousands of items into a dictionary and it will continue to perform well. But the absolute speed of creating a dictionary simply isn't a primary concern of the Python implementors, at least when the differences are this small.
What is the advantage of using a lambda:None function?
I saw the following code: eris = lambda:None eris.jkcpp = np.einsum('iipq->ipq', eriaa[:ncore[0],:ncore[0],:,:]) eris.jc_PP = np.einsum('iipq->pq', eriab[:ncore[0],:ncore[0],:,:]) Can we define arbitrary attributes for a function defined by lambda:None?
This looks like a trick to create a simple object to hold values in one line. Most built-in objects don't allow you to set arbitrary attributes on them: >>> object().x = 0 Traceback (most recent call last): File "<input>", line 1, in <module> AttributeError: 'object' object has no attribute 'x' >>> ''.x = 0 Traceback (most recent call last): File "<input>", line 1, in <module> AttributeError: 'str' object has no attribute 'x' >>> [].x = 0 Traceback (most recent call last): File "<input>", line 1, in <module> AttributeError: 'list' object has no attribute 'x' If you make your own class, then you can add whatever attributes you want. In this case you could make a class whose __init__ method assigns the attributes, but this may not be worth the boilerplate. So you can just make an empty class: >>> class Data(object): pass >>> d = Data() >>> d.x = 0 >>> d.x 0 Apparently the programmer is either not aware of this or doesn't want that extra line where the class is declared and has come up with their own workaround for storing data. It turns out functions, despite being a built-in type do allow you to add attributes to them: >>> def foo(): pass >>> foo.x = 0 >>> foo.x 0 Both the above and a lambda let you create such a container in a single statement. I actually think that's a neat idea.
Why is statistics.mean() so slow?
I compared the performance of the mean function of the statistics module with the simple sum(l)/len(l) method and found the mean function to be very slow for some reason. I used timeit with the two code snippets below to compare them, does anyone know what causes the massive difference in execution speed? I'm using Python 3.5. from timeit import repeat print(min(repeat('mean(l)', '''from random import randint; from statistics import mean; \ l=[randint(0, 10000) for i in range(10000)]''', repeat=20, number=10))) The code above executes in about 0.043 seconds on my machine. from timeit import repeat print(min(repeat('sum(l)/len(l)', '''from random import randint; from statistics import mean; \ l=[randint(0, 10000) for i in range(10000)]''', repeat=20, number=10))) The code above executes in about 0.000565 seconds on my machine.
Python's statistics module is not built for speed, but for precision. In the specs for this module, it appears that The built-in sum can lose accuracy when dealing with floats of wildly differing magnitude. Consequently, the above naive mean fails this "torture test" assert mean([1e30, 1, 3, -1e30]) == 1 returning 0 instead of 1, a purely computational error of 100% (a quick demonstration is at the end of this answer). Using math.fsum inside mean will make it more accurate with float data, but it also has the side-effect of converting any arguments to float even when unnecessary. E.g. we should expect the mean of a list of Fractions to be a Fraction, not a float. Conversely, if we take a look at the implementation of _sum() in this module, the first lines of the method's docstring seem to confirm that: def _sum(data, start=0): """_sum(data [, start]) -> (type, sum, count) Return a high-precision sum of the given numeric data as a fraction, together with the type to be converted to and the count of items. [...] """ So yeah, the statistics implementation of sum, instead of being a simple one-liner call to Python's built-in sum() function, takes about 20 lines by itself with a nested for loop in its body. This happens because statistics._sum chooses to guarantee the maximum precision for all types of number it could encounter (even if they widely differ from one another), instead of simply emphasizing speed. Hence, it appears normal that the built-in sum proves a hundred times faster. The cost is a much lower precision if you happen to call it with exotic numbers. Other options If you need to prioritize speed in your algorithms, you should have a look at Numpy or Pandas instead (built on top of Numpy), whose algorithms are implemented in C. These modules offer both high-speed and high-precision computations.
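Here is the quick demonstration promised above; the naive mean loses the small terms to cancellation, while statistics.mean does not (the exact float display may differ slightly on your build, but the cancellation effect is the point):
from statistics import mean

data = [1e30, 1, 3, -1e30]
print(sum(data) / len(data))  # 0.0 -- the 1 and 3 are swallowed by the huge terms
print(mean(data))             # 1.0 -- computed via exact fractions internally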
addCleanUp vs tearDown
Recently, Ned Batchelder during his talk at PyCon 2016 noted: If you are using unittest to write your tests, definitely use addCleanup, it's much better than tearDown. Up until now, I've never used addCleanUp() and got used to setUp()/tearDown() pair of methods for test "set up" and "tear down" phases. Why should I switch to addCleanUp() instead of tearDown()? It was also recently discussed in the Python unittest with Robert Collins podcast.
Per the addCleanup doc string: Cleanup items are called even if setUp fails (unlike tearDown). addCleanup can be used to register multiple functions, so you could use separate functions for each resource you wish to clean up. That would allow your code to be a bit more reusable/modular.
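A minimal sketch of the pattern (the open_resource_* helpers are hypothetical placeholders for whatever setup you actually do):
import unittest

class ExampleTest(unittest.TestCase):
    def setUp(self):
        self.res_a = open_resource_a()      # hypothetical helper
        self.addCleanup(self.res_a.close)   # registered immediately, so it still
        self.res_b = open_resource_b()      # runs even if this later line raises
        self.addCleanup(self.res_b.close)

    def test_something(self):
        self.assertIsNotNone(self.res_a)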
Generating random vectors of Euclidean norm <= 1 in Python?
More specifically, given a natural number d, how can I generate random vectors in R^d such that each vector x has Euclidean norm <= 1? Generating random vectors via numpy.random.rand(1,d) is no problem, but the likelihood of such a random vector having norm <= 1 is predictably bad for even not-small d. For example, even for d = 10 about 0.2% percent of such random vectors have appropriately small norm. So that seems like a silly solution. EDIT: Re: Walter's comment, yes, I'm looking for a uniform distribution over vectors in the unit ball in R^d.
Based on the Wolfram Mathworld article on hypersphere point picking and Nate Eldredge's answer to a similar question on math.stackexchange.com, you can generate such a vector by generating a vector of d independent Gaussian random variables and a random number U uniformly distributed over the closed interval [0, 1], then normalizing the vector to norm U^(1/d).
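A small sketch of that recipe (assuming a uniform distribution over the closed unit ball is what you want):
import numpy as np

def sample_unit_ball(d):
    # direction: isotropic Gaussian vector; radius: U**(1/d) with U ~ Uniform[0, 1]
    x = np.random.normal(size=d)
    u = np.random.uniform()
    return x * (u ** (1.0 / d) / np.linalg.norm(x))

v = sample_unit_ball(10)
print(np.linalg.norm(v) <= 1)  # True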
When is hash(n) == n in Python?
I've been playing with Python's hash function. For small integers, it appears hash(n) == n always. However this does not extend to large numbers: >>> hash(2**100) == 2**100 False I'm not surprised, I understand hash takes a finite range of values. What is that range? I tried using binary search to find the smallest number hash(n) != n >>> import codejamhelpers # pip install codejamhelpers >>> help(codejamhelpers.binary_search) Help on function binary_search in module codejamhelpers.binary_search: binary_search(f, t) Given an increasing function :math:`f`, find the greatest non-negative integer :math:`n` such that :math:`f(n) \le t`. If :math:`f(n) > t` for all :math:`n \ge 0`, return None. >>> f = lambda n: int(hash(n) != n) >>> n = codejamhelpers.binary_search(f, 0) >>> hash(n) 2305843009213693950 >>> hash(n+1) 0 What's special about 2305843009213693951? I note it's less than sys.maxsize == 9223372036854775807 Edit: I'm using Python 3. I ran the same binary search on Python 2 and got a different result 2147483648, which I note is sys.maxint+1 I also played with [hash(random.random()) for i in range(10**6)] to estimate the range of hash function. The max is consistently below n above. Comparing the min, it seems Python 3's hash is always positively valued, whereas Python 2's hash can take negative values.
2305843009213693951 is 2^61 - 1. It's the largest Mersenne prime that fits into 64 bits. If you have to make a hash just by taking the value mod some number, then a large Mersenne prime is a good choice -- it's easy to compute and ensures an even distribution of possibilities. (Although I personally would never make a hash this way) It's especially convenient to compute the modulus for floating point numbers. They have an exponential component that multiplies the whole number by 2^x. Since 2^61 = 1 mod 2^61-1, you only need to consider the (exponent) mod 61. See: https://en.wikipedia.org/wiki/Mersenne_prime
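You can check the modulus behaviour directly on a 64-bit CPython 3 build (a quick sanity check, not something the language spec guarantees):
>>> M = 2**61 - 1
>>> hash(M)            # M is congruent to 0 modulo M
0
>>> hash(M - 1) == M - 1
True
>>> hash(2**100) == pow(2, 100, M)
True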
How can I make sense of the `else` statement in Python loops?
Many Python programmers are probably unaware that the syntax of while loops and for loops includes an optional else: clause: for val in iterable: do_something(val) else: clean_up() The body of the else clause is a good place for certain kinds of clean-up actions, and is executed on normal termination of the loop: I.e., exiting the loop with return or break skips the else clause; exiting after a continue executes it. I know this only because I just looked it up (yet again), because I can never remember when the else clause is executed. Always? On "failure" of the loop, as the name suggests? On regular termination? Even if the loop is exited with return? I can never be entirely sure without looking it up. I blame my persisting uncertainty on the choice of keyword: I find else incredibly unmnemonic for this semantics. My question is not "why is this keyword used for this purpose" (which I would probably vote to close, though only after reading the answers and comments), but how can I think about the else keyword so that its semantics make sense, and I can therefore remember it? I'm sure there was a fair amount of discussion about this, and I can imagine that the choice was made for consistency with the try statement's else: clause (which I also have to look up), and with the goal of not adding to the list of Python's reserved words. Perhaps the reasons for choosing else will clarify its function and make it more memorable, but I'm after connecting name to function, not after historical explanation per se. The answers to this question, which my question was briefly closed as a duplicate of, contain a lot of interesting back story. My question has a different focus (how to connect the specific semantics of else with the keyword choice), but I feel there should be a link to this question somewhere.
(This is inspired by @Mark Tolonen's answer.) An if statement runs its else clause if its condition evaluates to false. Identically, a while loop runs the else clause if its condition evaluates to false. This rule matches the behavior you described: In normal execution, the while loop repeatedly runs until the condition evaluates to false, and therefore naturally exiting the loop runs the else clause. When you execute a break statement, you exit out of the loop without evaluating the condition, so the condition cannot evaluate to false and you never run the else clause. When you execute a continue statement, you evaluate the condition again, and do exactly what you normally would at the beginning of a loop iteration. So, if the condition is true, you keep looping, but if it is false you run the else clause. Other methods of exiting the loop, such as return, do not evaluate the condition and therefore do not run the else clause. for loops behave the same way. Just consider the condition as true if the iterator has more elements, or false otherwise.
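A short example that exercises both paths is the classic prime-search idiom; reading else as "the loop ran to completion without break" matches the rule above:
for n in range(2, 8):
    for x in range(2, n):
        if n % x == 0:
            print(n, '=', x, '*', n // x)
            break
    else:
        # reached only when the inner loop exhausts its iterator without break
        print(n, 'is prime')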
How to classify blurry numbers with openCV
I would like to capture the number from this kind of picture. I tried multi-scale matching from the following link. http://www.pyimagesearch.com/2015/01/26/multi-scale-template-matching-using-python-opencv/ All I want to know is the red number. But the problem is, the red number is too blurry for openCV to recognize/match the template. Would there be another possible way to detect this red number on the black background?
Classifying Digits You clarified in comments that you've already isolated the number part of the image pre-detection, so I'll start under that assumption. Perhaps you can approximate the perspective effects and "blurriness" of the number by treating it as a hand-written number. In this case, there is a famous data-set of handwritten numerals for classification training called mnist. Yann LeCun has enumerated the state of the art on this dataset here mnist hand-written dataset. At the far end of the spectrum, convolutional neural networks yield outrageously low error rates (fractions of 1% error). For a simpler solution, k-nearest neighbours using deskewing, noise removal, blurring, and 2 pixel shift, yielded about 1% error, and is significantly faster to implement. Python opencv has an implementation. Neural networks and support vector machines with deskewing also have some pretty impressive performance rates. Note that convolutional networks don't have you pick your own features, so the important color-differential information here might just be used for narrowing the region-of-interest. Other approaches, where you define your feature space, might incorporate the known color difference more precisely. Python supports a lot of machine learning techniques in the terrific package sklearn - here are examples of sklearn applied to mnist. If you're looking for an tutorialized explanation of machine learning in python, sklearn's own tutorial is very verbose From the sklearn link: Those are the kinds of items you're trying to classify if you learn using this approach. To emphasize how easy it is to start training some of these machine learning-based classifiers, here is an abridged section from the example code in the linked sklearn package: digits = datasets.load_digits() # built-in to sklearn! data = digits.images.reshape((len(digits.images), -1)) # Create a classifier: a support vector classifier classifier = svm.SVC(gamma=0.001) # We learn the digits on the first half of the digits classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2]) If you're wedded to openCv (possibly because you want to port to a real-time system in the future), opencv3/python has a tutorial on this exact topic too! Their demo uses k-nearest-neighbor (listed in the LeCun page), but they also have svms and the many of the other tools in sklearn. Their ocr page using SVMs uses deskewing, which might be useful with the perspective effect in your problem: UPDATE: I used the out-of-the box skimage approach outlined above on your image, heavily cropped, and it correctly classified it. A lot more testing would be required to see if this is rhobust in practice ^^ That tiny image is the 8x8 crop of the image you embedded in your question. mnist is 8x8 images. That's why it trains in less than a second with default arguments in skimage. I converted it the correct format by scaling it up to the mnist range using number = scipy.misc.imread("cropped_image.png") datum = (number[:,:,0]*15).astype(int).reshape((64,)) classifier.predict(datum) # returns 8 I didn't change anything else from the example; here, I'm only using the first channel for classification, and no smart feature computation. 15 looked about right to me; you'll need to tune it to get within the target range or (ideally) provide your own training and testing set Object Detection If you haven't isolated the number in the image you'll need an object detector. 
The literature space on this problem is gigantic and I won't start down that rabbit hole (google Viola and Jones, maybe?) This blog covers the fundamentals of a "sliding window" detector in python. Adrian Rosebrock looks like he's even a contributor on SO, and that page has some good examples of opencv and python-based object detectors fairly tutorialized (you actually linked to that blog in your question, I didn't realize). In short, classify windows across the image and pick the window of highest confidence. Narrowing down the search space with a region of interest will of course yield huge improvements in all areas of performance
Fast Numpy Loops
How do you optimize this code (without vectorizing, as this leads up to using the semantics of the calculation, which is quite often far from being non-trivial): slow_lib.py: import numpy as np def foo(): size = 200 np.random.seed(1000031212) bar = np.random.rand(size, size) moo = np.zeros((size,size), dtype = np.float) for i in range(0,size): for j in range(0,size): val = bar[j] moo += np.outer(val, val) The point is that such kind loops correspond quite often to operations where you have double sums over some vector operation. This is quite slow: >>t = timeit.timeit('foo()', 'from slow_lib import foo', number = 10) >>print ("took: "+str(t)) took: 41.165681839 Ok, so then let's cynothize it and add type annotations likes there is no tomorrow: c_slow_lib.pyx: import numpy as np cimport numpy as np import cython @cython.boundscheck(False) @cython.wraparound(False) def foo(): cdef int size = 200 cdef int i,j np.random.seed(1000031212) cdef np.ndarray[np.double_t, ndim=2] bar = np.random.rand(size, size) cdef np.ndarray[np.double_t, ndim=2] moo = np.zeros((size,size), dtype = np.float) cdef np.ndarray[np.double_t, ndim=1] val for i in xrange(0,size): for j in xrange(0,size): val = bar[j] moo += np.outer(val, val) >>t = timeit.timeit('foo()', 'from c_slow_lib import foo', number = 10) >>print ("took: "+str(t)) took: 42.3104710579 ... ehr... what? Numba to the rescue! numba_slow_lib.py: import numpy as np from numba import jit size = 200 np.random.seed(1000031212) bar = np.random.rand(size, size) @jit def foo(): bar = np.random.rand(size, size) moo = np.zeros((size,size), dtype = np.float) for i in range(0,size): for j in range(0,size): val = bar[j] moo += np.outer(val, val) >>t = timeit.timeit('foo()', 'from numba_slow_lib import foo', number = 10) >>print("took: "+str(t)) took: 40.7327859402 So is there really no way to speed this up? The point is: if I convert the inner loop into a vectorized version (building a larger matrix representing the inner loop and then calling np.outer on the larger matrix) I get much faster code. if I implement something similar in Matlab (R2016a) this performs quite well due to JIT.
Here's the code for outer: def outer(a, b, out=None): a = asarray(a) b = asarray(b) return multiply(a.ravel()[:, newaxis], b.ravel()[newaxis,:], out) So each call to outer involves a number of python calls. Those eventually call compiled code to perform the multiplication. But each incurs an overhead that has nothing to do with the size of your arrays. So 200 (200**2?) calls to outer will have all that overhead, whereas one call to outer with all 200 rows has one overhead set, followed by one fast compiled operation. cython and numba don't compile or otherwise bypass the Python code in outer. All they can do is streamline the iteration code that you wrote - and that isn't consuming much time. Without getting into details, the MATLAB jit must be able to replace the 'outer' with faster code - it rewrites the iteration. But my experience with MATLAB dates from a time before its jit. For real speed improvements with cython and numba you need to use primitive numpy/python code all the way down. Or better yet focus your effort on slow inner pieces. Replacing your outer with a streamlined version cuts run time about in half: def foo1(N): size = N np.random.seed(1000031212) bar = np.random.rand(size, size) moo = np.zeros((size,size), dtype = np.float) for i in range(0,size): for j in range(0,size): val = bar[j] moo += val[:,None]*val return moo With the full N=200 your function took 17s per loop. If I replace the inner two lines with pass (no calculation), time drops to 3ms per loop. In other words, the outer loop mechanism is not a big time consumer, at least not compared to many calls to outer().
Variable step in a for loop
I am trying to loop between 0.01 and 10, but between 0.01 and 0.1 use 0.01 as the step, then between 0.1 and 1.0 use 0.1 as step, and between 1.0 and 10.0 use 1.0 as step. I have the while loop code written, but want to make it more pythonic. i = 0.01 while i < 10: # do something print i if i < 0.1: i += 0.01 elif i < 1.0: i += 0.1 else: i += 1 This will produce 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 2, 3, 4, 5, 6, 7, 8, 9
A special-purpose generator function might be the right way to go. This would effectively separate the boring part (getting the list of numbers right) from the interesting part (the # do something in your example). def my_range(): for j in .01, .1, 1.: for i in range(1, 10, 1): yield i * j for x in my_range(): print x
How can I perform two-dimensional interpolation using scipy?
This Q&A is intended as a canonical(-ish) post concerning two-dimensional (and multi-dimensional) interpolation using scipy. There are often questions concerning the basic syntax of various multidimensional interpolation methods; I hope to set these straight too. I have a set of scattered two-dimensional data points, and I would like to plot them as a nice surface, preferably using something like contourf or plot_surface in matplotlib.pyplot. How can I interpolate my two-dimensional or multidimensional data to a mesh using scipy? I've found the scipy.interpolate sub-package, but I keep getting errors when using interp2d or bisplrep or griddata or rbf. What is the proper syntax of these methods?
Disclaimer: I'm mostly writing this post with syntactical considerations and general behaviour in mind. I'm not familiar with the memory and CPU aspect of the methods described, and I aim this answer at those who have reasonably small sets of data, such that the quality of the interpolation can be the main aspect to consider. I am aware that when working with very large data sets, the better-performing methods (namely griddata and Rbf) might not be feasible. I'm going to compare three kinds of multi-dimensional interpolation methods (interp2d/splines, griddata and Rbf). I will subject them to two kinds of interpolation tasks and two kinds of underlying functions (points from which are to be interpolated). The specific examples will demonstrate two-dimensional interpolation, but the viable methods are applicable in arbitrary dimensions. Each method provides various kinds of interpolation; in all cases I will use cubic interpolation (or something close1). It's important to note that whenever you use interpolation you introduce bias compared to your raw data, and the specific methods used affect the artifacts that you will end up with. Always be aware of this, and interpolate responsibly. The two interpolation tasks will be upsampling (input data is on a rectangular grid, output data is on a denser grid) interpolation of scattered data onto a regular grid The two functions (over the domain [x,y] in [-1,1]x[-1,1]) will be a smooth and friendly function: cos(pi*x)*sin(pi*y); range in [-1, 1] an evil (and in particular, non-continuous) function: x*y/(x^2+y^2) with a value of 0.5 near the origin; range in [-0.5, 0.5] Here's how they look: I will first demonstrate how the three methods behave under these four tests, then I'll detail the syntax of all three. If you know what you should expect from a method, you might not want to waste your time learning its syntax (looking at you, interp2d). Test data For the sake of explicitness, here is the code with which I generated the input data. While in this specific case I'm obviously aware of the function underlying the data, I will only use this to generate input for the interpolation methods. I use numpy for convenience (and mostly for generating the data), but scipy alone would suffice too. import numpy as np import scipy.interpolate as interp # auxiliary function for mesh generation def gimme_mesh(n): minval = -1 maxval = 1 # produce an asymmetric shape in order to catch issues with transpositions return np.meshgrid(np.linspace(minval,maxval,n), np.linspace(minval,maxval,n+1)) # set up underlying test functions, vectorized def fun_smooth(x, y): return np.cos(np.pi*x)*np.sin(np.pi*y) def fun_evil(x, y): # watch out for singular origin; function has no unique limit there return np.where(x**2+y**2>1e-10, x*y/(x**2+y**2), 0.5) # sparse input mesh, 6x7 in shape N_sparse = 6 x_sparse,y_sparse = gimme_mesh(N_sparse) z_sparse_smooth = fun_smooth(x_sparse, y_sparse) z_sparse_evil = fun_evil(x_sparse, y_sparse) # scattered input points, 10^2 altogether (shape (100,)) N_scattered = 10 x_scattered,y_scattered = np.random.rand(2,N_scattered**2)*2 - 1 z_scattered_smooth = fun_smooth(x_scattered, y_scattered) z_scattered_evil = fun_evil(x_scattered, y_scattered) # dense output mesh, 20x21 in shape N_dense = 20 x_dense,y_dense = gimme_mesh(N_dense) Smooth function and upsampling Let's start with the easiest task. 
Here's how an upsampling from a mesh of shape [6,7] to one of [20,21] works out for the smooth test function: Even though this is a simple task, there are already subtle differences between the outputs. At a first glance all three outputs are reasonable. There are two features to note, based on our prior knowledge of the underlying function: the middle case of griddata distorts the data most. Note the y==-1 boundary of the plot (nearest the x label): the function should be strictly zero (since y==-1 is a nodal line for the smooth function), yet this is not the case for griddata. Also note the x==-1 boundary of the plots (behind, to the left): the underlying function has a local maximum (implying zero gradient near the boundary) at [-1, -0.5], yet the griddata output shows clearly non-zero gradient in this region. The effect is subtle, but it's a bias none the less. (The fidelity of Rbf is even better with the default choice of radial functions, dubbed multiquadratic.) Evil function and upsampling A bit harder task is to perform upsampling on our evil function: Clear differences are starting to show among the three methods. Looking at the surface plots, there are clear spurious extrema appearing in the output from interp2d (note the two humps on the right side of the plotted surface). While griddata and Rbf seem to produce similar results at first glance, the latter seems to produce a deeper minimum near [0.4, -0.4] that is absent from the underlying function. However, there is one crucial aspect in which Rbf is far superior: it respects the symmetry of the underlying function (which is of course also made possible by the symmetry of the sample mesh). The output from griddata breaks the symmetry of the sample points, which is already weakly visible in the smooth case. Smooth function and scattered data Most often one wants to perform interpolation on scattered data. For this reason I expect these tests to be more important. As shown above, the sample points were chosen pseudo-uniformly in the domain of interest. In realistic scenarios you might have additional noise with each measurement, and you should consider whether it makes sense to interpolate your raw data to begin with. Output for the smooth function: Now there's already a bit of a horror show going on. I clipped the output from interp2d to between [-1, 1] exclusively for plotting, in order to preserve at least a minimal amount of information. It's clear that while some of the underlying shape is present, there are huge noisy regions where the method completely breaks down. The second case of griddata reproduces the shape fairly nicely, but note the white regions at the border of the contour plot. This is due to the fact that griddata only works inside the convex hull of the input data points (in other words, it doesn't perform any extrapolation). I kept the default NaN value for output points lying outside the convex hull.2 Considering these features, Rbf seems to perform best. Evil function and scattered data And the moment we've all been waiting for: It's no huge surprise that interp2d gives up. In fact, during the call to interp2d you should expect some friendly RuntimeWarnings complaining about the impossibility of the spline to be constructed. As for the other two methods, Rbf seems to produce the best output, even near the borders of the domain where the result is extrapolated. So let me say a few words about the three methods, in decreasing order of preference (so that the worst is the least likely to be read by anybody). 
scipy.interpolate.Rbf The Rbf class stands for "radial basis functions". To be honest I've never considered this approach until I started researching for this post, but I'm pretty sure I'll be using these in the future. Just like the spline-based methods (see later), usage comes in two steps: first one creates a callable Rbf class instance based on the input data, and then calls this object for a given output mesh to obtain the interpolated result. Example from the smooth upsampling test: import scipy.interpolate as interp zfun_smooth_rbf = interp.Rbf(x_sparse, y_sparse, z_sparse_smooth, function='cubic', smooth=0) # default smooth=0 for interpolation z_dense_smooth_rbf = zfun_smooth_rbf(x_dense, y_dense) # not really a function, but a callable class instance Note that both input and output points were 2d arrays in this case, and the output z_dense_smooth_rbf has the same shape as x_dense and y_dense without any effort. Also note that Rbf supports arbitrary dimensions for interpolation. So, scipy.interpolate.Rbf produces well-behaved output even for crazy input data supports interpolation in higher dimensions extrapolates outside the convex hull of the input points (of course extrapolation is always a gamble, and you should generally not rely on it at all) creates an interpolator as a first step, so evaluating it in various output points is less additional effort can have output points of arbitrary shape (as opposed to being constrained to rectangular meshes, see later) prone to preserving the symmetry of the input data supports multiple kinds of radial functions for keyword function: multiquadric, inverse, gaussian, linear, cubic, quintic, thin_plate and user-defined arbitrary scipy.interpolate.griddata My former favourite, griddata, is a general workhorse for interpolation in arbitrary dimensions. It doesn't perform extrapolation beyond setting a single preset value for points outside the convex hull of the nodal points, but since extrapolation is a very fickle and dangerous thing, this is not necessarily a con. Usage example: z_dense_smooth_griddata = interp.griddata(np.array([x_sparse.ravel(),y_sparse.ravel()]).T, z_sparse_smooth.ravel(), (x_dense,y_dense), method='cubic') # default method is linear Note the slightly kludgy syntax. The input points have to be specified in an array of shape [N, D] in D dimensions. For this we first have to flatten our 2d coordinate arrays (using ravel), then concatenate the arrays and transpose the result. There are multiple ways to do this, but all of them seem to be bulky. The input z data also have to be flattened. We have a bit more freedom when it comes to the output points: for some reason these can also be specified as a tuple of multidimensional arrays. Note that the help of griddata is misleading, as it suggests that the same is true for the input points (at least for version 0.17.0): griddata(points, values, xi, method='linear', fill_value=nan, rescale=False) Interpolate unstructured D-dimensional data. Parameters ---------- points : ndarray of floats, shape (n, D) Data point coordinates. Can either be an array of shape (n, D), or a tuple of `ndim` arrays. values : ndarray of float or complex, shape (n,) Data values. xi : ndarray of float, shape (M, D) Points at which to interpolate data. 
In a nutshell, scipy.interpolate.griddata produces well-behaved output even for crazy input data supports interpolation in higher dimensions does not perform extrapolation, a single value can be set for the output outside the convex hull of the input points (see fill_value) computes the interpolated values in a single call, so probing multiple sets of output points starts from scratch can have output points of arbitrary shape supports nearest-neighbour and linear interpolation in arbitrary dimensions, cubic in 1d and 2d. Nearest-neighbour and linear interpolation use NearestNDInterpolator and LinearNDInterpolator under the hood, respectively. 1d cubic interpolation uses a spline, 2d cubic interpolation uses CloughTocher2DInterpolator to construct a continuously differentiable piecewise-cubic interpolator. might violate the symmetry of the input data scipy.interpolate.interp2d/scipy.interpolate.bisplrep The only reason I'm discussing interp2d and its relatives is that it has a deceptive name, and people are likely to try using it. Spoiler alert: don't use it (as of scipy version 0.17.0). It's already more special than the previous subjects in that it's specifically used for two-dimensional interpolation, but I suspect this is by far the most common case for multivariate interpolation. As far as syntax goes, interp2d is similar to Rbf in that it first needs constructing an interpolation instance, which can be called to provide the actual interpolated values. There's a catch, however: the output points have to be located on a rectangular mesh, so inputs going into the call to the interpolator have to be 1d vectors which span the output grid, as if from numpy.meshgrid: # reminder: x_sparse and y_sparse are of shape [6, 7] from numpy.meshgrid zfun_smooth_interp2d = interp.interp2d(x_sparse, y_sparse, z_sparse_smooth, kind='cubic') # default kind is 'linear' # reminder: x_dense and y_dense are of shape [20, 21] from numpy.meshgrid xvec = x_dense[0,:] # 1d array of unique x values, 20 elements yvec = y_dense[:,0] # 1d array of unique y values, 21 elements z_dense_smooth_interp2d = zfun_smooth_interp2d(xvec,yvec) # output is [20, 21]-shaped array One of the most common mistakes when using interp2d is putting your full 2d meshes into the interpolation call, which leads to explosive memory consumption, and hopefully to a hasty MemoryError. Now, the greatest problem with interp2d is that it often doesn't work. In order to understand this, we have to look under the hood. It turns out that interp2d is a wrapper for the lower-level functions bisplrep+bisplev, which are in turn wrappers for FITPACK routines (written in Fortran). The equivalent call to the previous example would be kind = 'cubic' if kind=='linear': kx=ky=1 elif kind=='cubic': kx=ky=3 elif kind=='quintic': kx=ky=5 # bisplrep constructs a spline representation, bisplev evaluates the spline at given points bisp_smooth = interp.bisplrep(x_sparse.ravel(),y_sparse.ravel(),z_sparse_smooth.ravel(),kx=kx,ky=ky,s=0) z_dense_smooth_bisplrep = interp.bisplev(xvec,yvec,bisp_smooth).T # note the transpose Now, here's the thing about interp2d: (in scipy version 0.17.0) there is a nice comment in interpolate/interpolate.py for interp2d: if not rectangular_grid: # TODO: surfit is really not meant for interpolation! 
self.tck = fitpack.bisplrep(x, y, z, kx=kx, ky=ky, s=0.0) and indeed in interpolate/fitpack.py, in bisplrep there's some setup and ultimately tx, ty, c, o = _fitpack._surfit(x, y, z, w, xb, xe, yb, ye, kx, ky, task, s, eps, tx, ty, nxest, nyest, wrk, lwrk1, lwrk2) And that's it. The routines underlying interp2d are not really meant to perform interpolation. They might suffice for sufficiently well-behaved data, but under realistic circumstances you will probably want to use something else. Just to conclude, interpolate.interp2d can lead to artifacts even with well-tempered data is specifically for bivariate problems (although there's the limited interpn for input points defined on a grid) performs extrapolation creates an interpolator as a first step, so evaluating it in various output points is less additional effort can only produce output over a rectangular grid, for scattered output you would have to call the interpolator in a loop supports linear, cubic and quintic interpolation might violate the symmetry of the input data Footnotes: [1] I'm fairly certain that the cubic and linear kind of basis functions of Rbf do not exactly correspond to the other interpolators of the same name. [2] These NaNs are also the reason for why the surface plot seems so odd: matplotlib historically has difficulties with plotting complex 3d objects with proper depth information. The NaN values in the data confuse the renderer, so parts of the surface that should be in the back are plotted to be in the front. This is an issue with visualization, and not interpolation.
How to round a number to a chosen integer
In Denmark we have an odd grading system that goes as follows. [-3,00,02,4,7,10,12] Our assignment is to take a vector with different decimal numbers, and round it to the nearest valid grade. Here is our code so far. import numpy as np def roundGrade(grades): if (-5<grades<-1.5): gradesRounded = -3 elif (-1.5<=grades<1.5): gradesRounded = 00 elif (1.5<=grades<3): gradesRounded = 2 elif (3<=grades<5.5): gradesRounded = 4 elif (5.5<=grades<8.5): gradesRounded = 7 elif (8.5<=grades<11): gradesRounded = 10 elif (11<=grades<15): gradesRounded = 12 return gradesRounded print(roundGrade(np.array[-2.1,6.3,8.9,9])) Our console doesn't seem to like this and returns: TypeError: 'builtin_function_or_method' object is not subscriptable All help is appreciated, and if you have a smarter method you are welcome to put us in our place.
You are getting that error because when you print, you are using incorrect syntax: print(roundGrade(np.array[-2.1,6.3,8.9,9])) needs to be print(roundGrade(np.array([-2.1,6.3,8.9,9]))) Notice the extra parentheses: np.array(<whatever>) However, this won't work, since your function expects a single number. Fortunately, numpy provides a function which can fix that for you: In [15]: roundGrade = np.vectorize(roundGrade) In [16]: roundGrade(np.array([-2.1,6.3,8.9,9])) Out[16]: array([-3, 7, 10, 10]) http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.vectorize.html
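As a possible "smarter method", here is a hedged sketch that avoids the per-element Python function entirely by using numpy.searchsorted with the cut-points from the question's if/elif chain; the cut_points array and the side='right' boundary handling are my own reading of those boundaries, so double-check them against your grading rules:

import numpy as np

valid_grades = np.array([-3, 0, 2, 4, 7, 10, 12])
cut_points = np.array([-1.5, 1.5, 3, 5.5, 8.5, 11])  # boundaries taken from the question's if/elif chain

def round_grades(values):
    # searchsorted returns, for each value, the index of the interval it falls into
    idx = np.searchsorted(cut_points, values, side='right')
    return valid_grades[idx]

print(round_grades([-2.1, 6.3, 8.9, 9]))  # expected: [-3  7 10 10]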
mod_wsgi: Reload Code via Inotify - not every N seconds
Up to now I followed this advice to reload the code: https://code.google.com/archive/p/modwsgi/wikis/ReloadingSourceCode.wiki This has the drawback that code changes get detected only every N seconds. I could use N=0.1, but this results in useless disk IO. AFAIK the inotify callback of the linux kernel is available via python. Is there a faster way to detect code changes and restart the wsgi handler? We use daemon mode on linux. Why code reload for mod_wsgi at all There is interest in why I want this at all. Here is my setup: Most people use "manage.py runserver" for development and some other wsgi deployment for production. In my context we have automated the creation of new systems and prod and development systems are mostly identical. One operating system (linux) can host N systems (virtual environments). Developers can use runserver or mod_wsgi. Using runserver has the benefit that it's easy for debugging, mod_wsgi has the benefit that you don't need to start the server first. mod_wsgi has the benefit that you know the URL: https://dev-server/system-name/myurl/ With runserver you don't know the port. Use case: You want to link from an internal wiki to a dev-system .... A dirty hack to get code reload for mod_wsgi, which we used in the past: maximum-requests=1 but this is slow.
Preliminaries. Developers can use runserver or mod_wsgi. Using runserver has the benefit that it's easy for debugging, mod_wsgi has the benefit that you don't need to start the server first. But you do, the server needs to be set up first and that takes a lot of effort. And the server needs to be started here as well, though you can configure it to start automatically at boot. If you are running on port 80 or 443, which is usually the case, the server can be started only by the root user. If it needs to be restarted you will have to ask the super user's help again. So ./manage.py runserver scores heavily here. mod_wsgi has the benefit that you know the URL: https://dev-server/system-name/myurl/ Which is no different from the dev server. By default it starts on port 8000 so you can access it as http://dev-server:8000/system-name/myurl/. If you wanted to use SSL with the development server you can use a package such as django-sslserver or you can put nginx in front of the django development server. With runserver you don't know the port. Use case: You want to link from an internal wiki to a dev-system .... With runserver, the port is well defined as mentioned above. And you can make it listen on a different port, for example with: ./manage.py runserver 0.0.0.0:9090 Note that if you put the development server behind apache (as a reverse proxy) or NGINX, the restarting problems etc. that I have mentioned above do not apply. So in short, for development work, whatever you do with mod_wsgi can be done with the django development server (aka ./manage.py runserver). Inotify Here we are getting to the main topic at last. Assuming you have installed inotify-tools you could type this into your shell. You don't need to write a script. while inotifywait -r -e modify .; do sudo kill -2 yourpid ; done This will result in the code being reloaded when ... ... using daemon mode with a single process you can send a SIGINT signal to the daemon process using the ‘kill’ command, or have the application send the signal to itself when a specific URL is triggered. ref: http://modwsgi.readthedocs.io/en/develop/user-guides/frequently-asked-questions.html#application-reloading alternatively while inotifywait -r -e modify .; do touch wsgi.py ; done when ... using daemon mode, with any number of processes, and the process reload mechanism of mod_wsgi 2.0 has been enabled, then all you need to do is touch the WSGI script file, thereby updating its modification time, and the daemon processes will automatically shutdown and restart the next time they receive a request. In both situations we are using the -r flag to tell inotify to monitor subdirectories. That means each time you save a .css or .js file apache will reload. But without the -r flag changes to python code in subfolders will be undetected. To have the best of both worlds, exclude css, js, images etc. with the --exclude option. What about when your IDE saves an auto backup file? Or vim saves the .swp file? That too will cause a code reload. So you would have to exclude those file types too. So in short, it's a lot of hard work to reproduce what the django development server does free of charge.
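If you prefer to do the watching from Python instead of a shell loop, here is a rough sketch using the third-party watchdog package (which wraps inotify on Linux). The paths are placeholders, and the "touch the WSGI script" trick assumes the daemon-mode reload mechanism described above; treat it as an illustration rather than a drop-in solution:

import os
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WSGI_SCRIPT = '/path/to/project/wsgi.py'   # placeholder path
PROJECT_DIR = '/path/to/project'           # placeholder path

class TouchWsgiOnChange(FileSystemEventHandler):
    def on_modified(self, event):
        # only react to Python source changes, not .swp/.css/.js noise
        if event.src_path.endswith('.py'):
            os.utime(WSGI_SCRIPT, None)    # "touch" the WSGI script to trigger a reload

observer = Observer()
observer.schedule(TouchWsgiOnChange(), PROJECT_DIR, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()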
Meaning of '>>' in Python byte code
I have disassembled the following python code def factorial(n): if n <= 1: return 1 elif n == 2: return 2 elif n ==4: print('hi') return n * 2 and the resulting bytecode 2 0 LOAD_FAST 0 (n) 3 LOAD_CONST 1 (1) 6 COMPARE_OP 1 (<=) 9 POP_JUMP_IF_FALSE 16 3 12 LOAD_CONST 1 (1) 15 RETURN_VALUE 4 >> 16 LOAD_FAST 0 (n) 19 LOAD_CONST 2 (2) 22 COMPARE_OP 2 (==) 25 POP_JUMP_IF_FALSE 32 5 28 LOAD_CONST 2 (2) 31 RETURN_VALUE 6 >> 32 LOAD_FAST 0 (n) 35 LOAD_CONST 3 (4) 38 COMPARE_OP 2 (==) 41 POP_JUMP_IF_FALSE 52 7 44 LOAD_CONST 4 ('hi') 47 PRINT_ITEM 48 PRINT_NEWLINE 49 JUMP_FORWARD 0 (to 52) 8 >> 52 LOAD_FAST 0 (n) 55 LOAD_CONST 2 (2) 58 BINARY_MULTIPLY 59 RETURN_VALUE What do the '>>' symbols in the above bytecode stand for?
They are jump targets; the positions that earlier *JUMP* bytecodes jump to when their condition is met. The first jump: 9 POP_JUMP_IF_FALSE 16 jumps to offset 16, so at offset 16 the output has a target >>: 4 >> 16 LOAD_FAST 0 (n) The dis.disassemble() function docs name each column of the output; the relevant one is: [...] a labelled instruction, indicated with >>, and the dis.findlabels() function: Detect all offsets in the code object code which are jump targets, and return a list of these offsets.
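If you want to see those targets programmatically rather than reading the >> markers, dis.findlabels (quoted above) can be called directly on the raw bytecode; the exact offsets printed depend on your Python version, so the example output below is only what the question's build would show:

import dis

def factorial(n):
    if n <= 1:
        return 1
    elif n == 2:
        return 2
    elif n == 4:
        print('hi')
    return n * 2

# offsets of all jump targets, i.e. the instructions dis marks with >>
print(dis.findlabels(factorial.__code__.co_code))   # e.g. [16, 32, 52] on the version used in the question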
Is there a Python constant for Unicode whitespace?
The string module contains a whitespace attribute, which is a string consisting of all the ASCII characters that are considered whitespace. Is there a corresponding constant that includes Unicode spaces too, such as the no-break space (U+00A0)? We can see from the question "strip() and strip(string.whitespace) give different results" that at least strip is aware of additional Unicode whitespace characters. This question was identified as a duplicate of In Python, how to list all characters matched by POSIX extended regex [:space:]?, but the answers to that question identify ways of searching for whitespace characters to generate your own list. This is a time-consuming process. My question was specifically about a constant.
Is there a Python constant for Unicode whitespace? Short answer: No. I have personally grepped for these characters (specifically, the numeric code points) in the Python code base, and such a constant is not there. The sections below explains why it is not necessary, and how it is implemented without this information being available as a constant. But having such a constant would also be a really bad idea. If the Unicode Consortium added another character/code-point that is semantically whitespace, the maintainers of Python would have a poor choice between continuing to support semantically incorrect code or changing the constant and possibly breaking pre-existing code that might (inadvisably) make assumptions about the constant not changing. How could it add these character code-points? There are 1,111,998 possible characters in Unicode. But only 120,672 are occupied as of version 8. Each new version of Unicode may add additional characters. One of these new characters might be a form of whitespace. The information is stored in a dynamically generated C function The code that determines what is whitespace in unicode is the following dynamically generated code. # Generate code for _PyUnicode_IsWhitespace() print("/* Returns 1 for Unicode characters having the bidirectional", file=fp) print(" * type 'WS', 'B' or 'S' or the category 'Zs', 0 otherwise.", file=fp) print(" */", file=fp) print('int _PyUnicode_IsWhitespace(const Py_UCS4 ch)', file=fp) print('{', file=fp) print(' switch (ch) {', file=fp) for codepoint in sorted(spaces): print(' case 0x%04X:' % (codepoint,), file=fp) print(' return 1;', file=fp) print(' }', file=fp) print(' return 0;', file=fp) print('}', file=fp) print(file=fp) This is a switch statement, which is a constant code block, but this information is not available as a module "constant" like the string module has. It is instead buried in the function compiled from C and not directly accessible from Python. This is likely because as more code points are added to Unicode, we would not be able to change constants for backwards compatibility reasons. The Generated Code Here's the generated code currently at the tip: int _PyUnicode_IsWhitespace(const Py_UCS4 ch) { switch (ch) { case 0x0009: case 0x000A: case 0x000B: case 0x000C: case 0x000D: case 0x001C: case 0x001D: case 0x001E: case 0x001F: case 0x0020: case 0x0085: case 0x00A0: case 0x1680: case 0x2000: case 0x2001: case 0x2002: case 0x2003: case 0x2004: case 0x2005: case 0x2006: case 0x2007: case 0x2008: case 0x2009: case 0x200A: case 0x2028: case 0x2029: case 0x202F: case 0x205F: case 0x3000: return 1; } return 0; } Making your own constant: The following code (from my answer here), in Python 3, generates a constant of all whitespace: import re import sys s = ''.join(chr(c) for c in range(sys.maxunicode+1)) ws = ''.join(re.findall(r'\s', s)) As an optimization, you could store this in a code base, instead of auto-generating it every new process, but I would caution against assuming that it would never change. >>> ws '\t\n\x0b\x0c\r\x1c\x1d\x1e\x1f \x85\xa0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200a\u2028\u2029\u202f\u205f\u3000' (Other answers to the question linked show how to get that for Python 2.) Remember that at one point, some people probably thought 256 character encodings was all that we'd ever need. 
>>> import string >>> string.whitespace ' \t\n\r\x0b\x0c' If you're insisting on keeping a constant in your code base, just generate the constant for your version of Python, and store it as a literal: unicode_whitespace = u'\t\n\x0b\x0c\r\x1c\x1d\x1e\x1f \x85\xa0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200a\u2028\u2029\u202f\u205f\u3000' The u prefix makes it unicode in Python 2 (2.7 happens to recognize the entire string above as whitespace too), and in Python 3 it is ignored as string literals are unicode by default.
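A quick sanity check of either generated constant, assuming Python 3, is to confirm that every character really is reported as whitespace by str.isspace(); the count of 29 code points matches the switch statement shown above, but it may grow in future Unicode versions:

# literal taken from the snippet above
ws = '\t\n\x0b\x0c\r\x1c\x1d\x1e\x1f \x85\xa0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200a\u2028\u2029\u202f\u205f\u3000'
print(all(c.isspace() for c in ws))  # True
print(len(ws))                       # 29 code points for this Unicode version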
Cut within a pattern using Python regex
Objective: I am trying to perform a cut in Python RegEx where split doesn't quite do what I want. I need to cut within a pattern, but between characters. What I am looking for: I need to recognize the pattern below in a string, and split the string at the location of the pipe. The pipe isn't actually in the string, it just shows where I want to split. Pattern: CDE|FG String: ABCDEFGHIJKLMNOCDEFGZYPE Results: ['ABCDE', 'FGHIJKLMNOCDE', 'FGZYPE'] What I have tried: It seems like using split with parentheses is close, but it doesn't keep the search pattern attached to the results like I need it to. re.split('CDE()FG', 'ABCDEFGHIJKLMNOCDEFGZYPE') Gives ['AB', 'HIJKLMNO', 'ZYPE'] When I actually need ['ABCDE', 'FGHIJKLMNOCDE', 'FGZYPE'] Motivation: Practicing with RegEx, and wanted to see if I could use RegEx to make a script that would predict the fragments of a protein digestion using specific proteases.
A non regex way would be to replace the pattern with the piped value and then split. >>> pattern = 'CDE|FG' >>> s = 'ABCDEFGHIJKLMNOCDEFGZYPE' >>> s.replace('CDEFG',pattern).split('|') ['ABCDE', 'FGHIJKLMNOCDE', 'FGZYPE']
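If you do want a pure regex solution, one option (my own suggestion, not part of the answer above) is to split on the zero-width position between the two halves of the pattern using lookbehind and lookahead; note that re.split only accepts patterns that can match an empty string from Python 3.7 onwards:

import re

s = 'ABCDEFGHIJKLMNOCDEFGZYPE'
# split at the empty position that is preceded by CDE and followed by FG
print(re.split(r'(?<=CDE)(?=FG)', s))   # ['ABCDE', 'FGHIJKLMNOCDE', 'FGZYPE'] on Python 3.7+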
Empty class size in python
I am just trying to understand the rationale behind the empty class size in Python. In C++, as everyone knows, the size of an empty class is always 1 byte (as far as I have seen); this lets the runtime create unique objects. I am trying to find out the size of an empty class in Python: class Empty:pass # i hope this will create empty class and when I do import sys print ("show",sys.getsizeof(Empty)) # i get 1016 I wonder why Empty takes this much (1016 bytes)? Also, is the value (1016) it returns a standard value that (mostly) never changes, like in C++? Do we expect any empty base class optimization from the Python interpreter? Is there any way we can reduce the size of an Empty (just for curiosity's sake)?
I assume you are running a 64 bit version of Python 3. On 32 bit Python 3.6 (on Linux), your code prints show 508. However, that's the size of the class object itself, which inherits quite a lot of things from the base object class. If you instead get the size of an instance of your class the result is much smaller. On my system, import sys class Empty(object): pass print("show", sys.getsizeof(Empty())) output show 28 which is a lot more compact. :) FWIW, on Python 2.6.6, sys.getsizeof(Empty) returns 448 for a new-style class, and a measly 44 for an old-style class (one that doesn't inherit from object). sys.getsizeof(Empty()) returns 28 for a new-style class instance and 32 for an old-style. You can reduce the size of an instance by using __slots__ This class variable can be assigned a string, iterable, or sequence of strings with variable names used by instances. __slots__ reserves space for the declared variables and prevents the automatic creation of __dict__ and __weakref__ for each instance. import sys class Empty(object): __slots__ = [] print("show", sys.getsizeof(Empty())) output show 8 Please read the docs to understand how to use this feature.
Implementing an asynchronous iterator
Per PEP-492 I am trying to implement an asynchronous iterator, such that I can do e.g. async for foo in bar: ... Here is a trivial example, similar to the one in the docs, with a very basic test of instantiation and async iteration: import pytest class TestImplementation: def __aiter__(self): return self async def __anext__(self): raise StopAsyncIteration @pytest.mark.asyncio # note use of pytest-asyncio marker async def test_async_for(): async for _ in TestImplementation(): pass However, when I execute my test suite, I see: =================================== FAILURES =================================== ________________________________ test_async_for ________________________________ @pytest.mark.asyncio async def test_async_for(): > async for _ in TestImplementation(): E TypeError: 'async for' received an invalid object from __aiter__: TestImplementation ...: TypeError ===================== 1 failed, ... passed in 2.89 seconds ====================== Why does my TestImplementation appear to be invalid? As far as I can tell it meets the protocol: An object must implement an __aiter__ method ... returning an asynchronous iterator object. An asynchronous iterator object must implement an __anext__ method ... returning an awaitable. To stop iteration __anext__ must raise a StopAsyncIteration exception. This is failing with the latest released versions of Python (3.5.1), py.test (2.9.2) and pytest-asyncio (0.4.1).
If you read a little further down the documentation it mentions that (emphasis mine): PEP 492 was accepted in CPython 3.5.0 with __aiter__ defined as a method, that was expected to return an awaitable resolving to an asynchronous iterator. In 3.5.2 (as PEP 492 was accepted on a provisional basis) the __aiter__ protocol was updated to return asynchronous iterators directly. Therefore for versions prior to 3.5.2 (released 2016/6/27) the documentation is slightly out of step with how to write a working asynchronous iterator. The fixed version for 3.5.0 and 3.5.1 looks like: class TestImplementation: async def __aiter__(self): # ^ note return self async def __anext__(self): raise StopAsyncIteration This was introduced on closing bug #27243 and is a little clearer in the data model documentation, which also suggests a way of writing backwards compatible code.
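A rough sketch of such backwards compatible code, assuming you need to support both 3.5.0/3.5.1 and 3.5.2+, is to pick the __aiter__ flavour at class-definition time; treat this as an illustration of the idea rather than the exact recipe from the docs:

import sys

class CompatibleImplementation:
    if sys.version_info < (3, 5, 2):
        # 3.5.0/3.5.1 expect __aiter__ to return an awaitable
        async def __aiter__(self):
            return self
    else:
        # 3.5.2+ expects __aiter__ to return the async iterator directly
        def __aiter__(self):
            return self

    async def __anext__(self):
        raise StopAsyncIteration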
Django Left Outer Join
I have a website where users can see a list of movies, and create reviews for them. The user should be able to see the list of all the movies. Additionally, IF they have reviewed the movie, they should be able to see the score that they gave it. If not, the movie is just displayed without the score. They do not care at all about the scores provided by other users. Consider the following models.py from django.contrib.auth.models import User from django.db import models class Topic(models.Model): name = models.TextField() def __str__(self): return self.name class Record(models.Model): user = models.ForeignKey(User) topic = models.ForeignKey(Topic) value = models.TextField() class Meta: unique_together = ("user", "topic") What I essentially want is this select * from bar_topic left join (select topic_id as tid, value from bar_record where user_id = 1) on tid = bar_topic.id Consider the following test.py for context: from django.test import TestCase from bar.models import * from django.db.models import Q class TestSuite(TestCase): def setUp(self): t1 = Topic.objects.create(name="A") t2 = Topic.objects.create(name="B") t3 = Topic.objects.create(name="C") # 2 for Johnny johnny = User.objects.create(username="Johnny") johnny.record_set.create(topic=t1, value=1) johnny.record_set.create(topic=t3, value=3) # 3 for Mary mary = User.objects.create(username="Mary") mary.record_set.create(topic=t1, value=4) mary.record_set.create(topic=t2, value=5) mary.record_set.create(topic=t3, value=6) def test_raw(self): print('\nraw\n---') with self.assertNumQueries(1): topics = Topic.objects.raw(''' select * from bar_topic left join (select topic_id as tid, value from bar_record where user_id = 1) on tid = bar_topic.id ''') for topic in topics: print(topic, topic.value) def test_orm(self): print('\norm\n---') with self.assertNumQueries(1): topics = Topic.objects.filter(Q(record__user_id=1)).values_list('name', 'record__value') for topic in topics: print(*topic) BOTH tests should print the exact same output, however, only the raw version spits out the correct table of results: raw --- A 1 B None C 3 the orm instead returns this orm --- A 1 C 3 Any attempt to join back the rest of the topics, those that have no reviews from user "johnny", result in the following: orm --- A 1 A 4 B 5 C 3 C 6 How can I accomplish the simple behavior of the raw query with the Django ORM? edit: This sort of works but seems very poor: topics = Topic.objects.filter(record__user_id=1).values_list('name', 'record__value') noned = Topic.objects.exclude(record__user_id=1).values_list('name') for topic in chain(topics, noned): ... edit: This works a little bit better, but still bad: topics = Topic.objects.filter(record__user_id=1).annotate(value=F('record__value')) topics |= Topic.objects.exclude(pk__in=topics) orm --- A 1 B 5 C 3
First of all, there is not a way (atm Django 1.9.7) to have a representation with Django's ORM of the raw query you posted, exactly as you want; although, you can get the same desired result with something like: >>> Topic.objects.annotate(f=Case(When(record__user=johnny, then=F('record__value')), output_field=IntegerField())).order_by('id', 'name', 'f').distinct('id', 'name').values_list('name', 'f') >>> [(u'A', 1), (u'B', None), (u'C', 3)] >>> Topic.objects.annotate(f=Case(When(record__user=mary, then=F('record__value')), output_field=IntegerField())).order_by('id', 'name', 'f').distinct('id', 'name').values_list('name', 'f') >>> [(u'A', 4), (u'B', 5), (u'C', 6)] Here is the SQL generated for the first query: >>> print Topic.objects.annotate(f=Case(When(record__user=johnny, then=F('record__value')), output_field=IntegerField())).order_by('id', 'name', 'f').distinct('id', 'name').values_list('name', 'f').query >>> SELECT DISTINCT ON ("payments_topic"."id", "payments_topic"."name") "payments_topic"."name", CASE WHEN "payments_record"."user_id" = 1 THEN "payments_record"."value" ELSE NULL END AS "f" FROM "payments_topic" LEFT OUTER JOIN "payments_record" ON ("payments_topic"."id" = "payments_record"."topic_id") ORDER BY "payments_topic"."id" ASC, "payments_topic"."name" ASC, "f" ASC Some notes Don't hesitate to use raw queries, especially when performance is the most important thing. Moreover, sometimes it is a must since you can't get the same result using Django's ORM; in other cases you can, but once in a while having clean and understandable code is more important than the performance in this piece of code. distinct with positional arguments is used in this answer, which is available for PostgreSQL only, atm. In the docs you can see more about conditional expressions.
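For what it's worth, later Django versions (2.0+) added FilteredRelation, which expresses this kind of conditional LEFT OUTER JOIN more directly; this is outside what the question's Django 1.9 setup supports, so take it as a forward-looking sketch rather than a tested answer for that version:

from django.db.models import FilteredRelation, Q

# annotate the relation with the ON-clause condition, then read through it
topics = (Topic.objects
          .annotate(johnny_record=FilteredRelation('record', condition=Q(record__user_id=1)))
          .values_list('name', 'johnny_record__value'))
# -> [('A', 1), ('B', None), ('C', 3)]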
Rename a worksheet in google Spreadsheet using API - python
I have been trying/looking to solve this problem for a long while. I have read the documentation for gspread and I cannot find that there is a way to rename a worksheet. Any of you know how to? I would massively appreciate it! There is indeed worksheet.title which gives the name of the worksheet, but I cannot find a way to rename the actual sheet. Thank you in advance!
This is an extraction of a library which I've coded personally: def _batch(self, requests): body = { 'requests': requests } return self._service.spreadsheets().batchUpdate(spreadsheetId=self.spreadsheetId, body=body).execute() def renameSheet(self, sheetId, newName): return self._batch({ "updateSheetProperties": { "properties": { "sheetId": sheetId, "title": newName, }, "fields": "title", } }) I think that with a little effort, you can implement it into your code and obtain what you want. In order to make the batchUpdate call, you will need the spreadsheetId as well as the initialized service, as explained in the Python Quickstart - Google Sheets API
Is there any legitimate use of list[True], list[False] in Python?
Since True and False are instances of int, the following is valid in Python: >>> l = [0, 1, 2] >>> l[False] 0 >>> l[True] 1 I understand why this happens. However, I find this behaviour a bit unexpected, and it can lead to hard-to-debug bugs. It has certainly bitten me a couple of times. Can anyone think of a legit use of indexing lists with True or False?
In the past, some people have used this behaviour to produce a poor-man's conditional expression: ['foo', 'bar'][eggs > 5] # produces 'bar' when eggs is 6 or higher, 'foo' otherwise However, with a proper conditional expression having been added to the language in Python 2.5, this is very much frowned upon, for the reasons you state: relying on booleans being a subclass of integers is too 'magical' and unreadable for a maintainer. So, unless you are code-golfing (deliberately producing very compact and obscure code), use 'bar' if eggs > 5 else 'foo' instead, which has the added advantage that the two expressions this selects between are lazily evaluated; if eggs > 5 is false, the expression before the if is never executed.
How do I reliably split a string in Python?
In Perl I can do: my ($x, $y) = split /:/, $str; And it will work whether or not the string contains the pattern. In Python, however this won't work: a, b = "foo".split(":") # ValueError: not enough values to unpack What's the canonical way to prevent errors in such cases?
If you're splitting into just two parts (like in your example) you can use str.partition() to get a guaranteed argument unpacking size of 3: >>> a, sep, b = "foo".partition(":") >>> a, sep, b ('foo', '', '') str.partition() always returns a 3-tuple, whether the separator is found or not. Another alternative for Python 3 is to use extended unpacking, as described in @cdarke's answer: >>> a, *b = "foo".split(":") >>> a, b ('foo', []) This assigns the first split item to a and the list of remaining items (if any) to b.
What is a good explanation of how to read the histogram feature of TensorBoard?
Question is simple, how do you read those graphs? I read their explanation and it doesn't make sense to me. I was reading TensorFlow's newly updated readme file for TensorBoard and in it it tries to explain what a "histogram" is. First it clarifies that it's not really a histogram: Right now, its name is a bit of a misnomer, as it doesn't show histograms; instead, it shows some high-level statistics on a distribution. I am trying to figure out what their description is actually trying to say. Right now I am trying to parse the specific sentence: Each line on the chart represents a percentile in the distribution over the data: for example, the bottom line shows how the minimum value has changed over time, and the line in the middle shows how the median has changed. The first question I have is, what do they mean by "each line"? There are axis lines that make a square grid on the graph, or maybe the plotted lines themselves. Consider a screen shot from the TensorBoard example: What are they referring to with "lines"? In the above example what are the lines and percentiles that they are talking about? Then the readme file tries to provide more detail with an example: Reading from top to bottom, the lines have the following meaning: [maximum, 93%, 84%, 69%, 50%, 31%, 16%, 7%, minimum] However, it's unclear to me what they are talking about. What are the lines and what are the percentiles? It seems that they are trying to replace this in the future, but meanwhile, I am stuck with this. Can someone help me understand how to use this?
The lines they are talking about are the plotted curves themselves, not the grid: reading from top to bottom, the curves show the maximum, the 93rd, 84th, 69th, 50th (median), 31st, 16th and 7th percentiles, and the minimum of the logged values at each training step. As for the meaning of percentile, check out the wikipedia article; basically, the 93rd percentile means that 93% of the values are situated below the 93rd percentile line
compare list of datetime to dict of datetime
I have a task to create sets of dates based on a specific condition, for example "greater than 2" will be passed and I need to create a set of all dates in this month that have a day > 2. I'll also be getting a start time and a stop time, e.g. 10am-6pm; in this case I will create a set of all the dates > 2 where every day starts at 10am and ends at 6pm. Below is an example: greater > 2 less < 9 start time :10am stop time :6 pm month:july date1: 2016-07-03 10:00, 2016-07-03 16:00 date2: 2016-07-04 10:00, 2016-07-04 16:00 date3: 2016-07-05 10:00, 2016-07-05 16:00 . . . date6: 2016-07-8 10:00, 2016-07-8 16:00 I decided to store these dates into a dictionary like the following: dictD = {'dates_between_2_9':[[2016-07-03 10:00, 2016-07-03 16:00], [2016-07-04 10:00, 2016-07-04 16:00], ....., [2016-07-08 10:00, 2016-07-08 16:00]]} I used the dict because I will have multiple conditions that I need to create sets of dates for, so there will be, for example, another key other than dates_between_2_9. On the other hand I get another request, based on a condition too, to create dates with a start time only, like the following: greater > 1 less than 12 start time : 2pm date1: 2016-07-02 14:00 date2: 2016-07-03 14:00 date3: 2016-07-04 14:00 . . . date10: 2016-07-11 14:00 I decided to store these dates in a list: listL = [2016-07-02 14:00,2016-07-03 14:00,2016-07-04 14:00 ... 2016-07-11 14:00] After that I compare each date from listL to the list of dates for each key from dictD, and if a date from listL lies within a start,stop time then I should remove it from the list and return only the dates from listL that don't overlap with dates from dictD. My logic is like the following: for L from listL: for every key in dictD: for item from dictD[key]: if item[0] < L < item[1] # check if item from list overlaps with a start,stop time from the dictionary. listL.remove(L) # I know I can't remove items from a list while iterating, so I will probably create a set and store all overlapped items and then subtract this set from set(listL) to get the difference. return listL My question is, am I using efficient data structures to handle my requirements? I see my logic is not that efficient, so I was wondering if there is a better way of approaching this problem? Any help would be greatly appreciated. Thanks in advance!
It sounds like you're trying to optimize your algorithm. To be honest, with data this size, it's probably not necessary. However, if you are interested, the general rule of thumb is that sets are faster than lists in Python when checking for membership. In this case, it's not clear what your sets might be. I've assumed that you have at most a minute-level of granularity, but you could go lower (for more memory) or indeed improve occupancy and performance by going for a larger granularity - e.g. hours. This code shows even relatively large sets can be at least 5x faster (and look a little simpler when comparing your data sets): from copy import copy from datetime import datetime, timedelta from timeit import timeit import time def make_range(start, open, close, days): result = [] base_start = start + open base_close = start + close while days > 0: result.append([base_start, base_close]) base_start += timedelta(days=1) base_close += timedelta(days=1) days -= 1 return result def make_range2(start, open, close, days): result = set() base_start = start + open base_close = start + close while days > 0: now = base_start while now <= base_close: result.add(now) now += timedelta(minutes=1) base_start += timedelta(days=1) base_close += timedelta(days=1) days -= 1 return result dateRange = { 'range1': make_range(datetime(2016, 7, 3, 0, 0), timedelta(hours=10), timedelta(hours=18), 6), } dateRange2 = { 'range1': make_range2(datetime(2016, 7, 3, 0, 0), timedelta(hours=10), timedelta(hours=18), 6), } dateList = [ datetime(2016, 7, 2, 14, 0), datetime(2016, 7, 3, 14, 0), datetime(2016, 7, 4, 14, 0), datetime(2016, 7, 5, 14, 0), datetime(2016, 7, 6, 14, 0), datetime(2016, 7, 7, 14, 0), datetime(2016, 7, 8, 14, 0), datetime(2016, 7, 9, 14, 0), datetime(2016, 7, 10, 14, 0), datetime(2016, 7, 11, 14, 0) ] dateSet = set(dateList) def f1(): result = copy(dateList) for a in dateList: for b in dateRange: for i in dateRange[b]: if i[0] <= a <= i[1]: result.remove(a) return result def f2(): result = copy(dateSet) for b in dateRange2: result = result.difference(dateRange2[b]) return result print(f1()) print(timeit("f1()", "from __main__ import f1", number=100000)) print(f2()) print(timeit("f2()", "from __main__ import f2", number=100000)) For the record, the results are as follows: [datetime.datetime(2016, 7, 2, 14, 0), datetime.datetime(2016, 7, 9, 14, 0), datetime.datetime(2016, 7, 10, 14, 0), datetime.datetime(2016, 7, 11, 14, 0)] 1.922587754837455 {datetime.datetime(2016, 7, 2, 14, 0), datetime.datetime(2016, 7, 9, 14, 0), datetime.datetime(2016, 7, 10, 14, 0), datetime.datetime(2016, 7, 11, 14, 0)} 0.30558400587733225 You could also convert the dict dateRange to a list, but with just 1 or 2 members, this is unlikely to make any real difference in performance. However, it makes more logical sense, as you are not actually using the dict to look up any specific key values - you are just iterating through all the values.
Digit separators in Python code
Is there any way to group digits in a Python code to increase code legibility? I've tried ' and _ which are digit separators of some other languages, but no avail. A weird operator which concatenates its left hand side with its right hand side could also work out.
This is not implemented in python at the present time. You can look at the lexical analysis for strict definitions python2.7, python3.5 ... Supposedly it will be implemented for python3.6, but it doesn't look like the documentation has been updated for that yet, nor is it available in python3.6.0a2: Python 3.6.0a2 (v3.6.0a2:378893423552, Jun 13 2016, 14:44:21) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> 1_000 File "<stdin>", line 1 1_000 ^ SyntaxError: invalid syntax >>> amount = 10_000_000.0 File "<stdin>", line 1 amount = 10_000_000.0 ^ SyntaxError: invalid syntax When it is implemented, you'll be able to use _ in your integer and float literals...
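For reference, once PEP 515 lands (it is specified for 3.6), the intended usage looks like the sketch below; the underscore in the format spec is also part of the proposal, so treat this as a preview of the planned behaviour rather than something you can run on a current release:

# Python 3.6+ (PEP 515) -- underscores in literals are ignored by the parser
budget = 10_000_000
flags = 0b_1111_0000
addr = 0xFF_FF_00_00

# and you can ask for them back when formatting
print('{:_}'.format(budget))   # 10_000_000
print(format(budget, ','))     # 10,000,000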
Find unique pairs in list of pairs
I have a (large) list of lists of integers, e.g., a = [ [1, 2], [3, 6], [2, 1], [3, 5], [3, 6] ] Most of the pairs will appear twice, where the order of the integers doesn't matter (i.e., [1, 2] is equivalent to [2, 1]). I'd now like to find the pairs that appear only once, and get a Boolean list indicating that. For the above example, b = [False, False, False, True, False] Since a is typically large, I'd like to avoid explicit loops. Mapping to frozensets may be advised, but I'm not sure if that's overkill.
from collections import Counter ctr = Counter(frozenset(x) for x in a) b = [ctr[frozenset(x)] == 1 for x in a] We can use Counter to get counts of each list (turn each list into a frozenset to ignore order) and then for each list check if it only appears once.
Two variables in Python have same id, but not lists or tuples
Two variables in Python have the same id: a = 10 b = 10 a is b >>> True If I take two lists: a = [1, 2, 3] b = [1, 2, 3] a is b >>> False according to this link Senderle answered that immutable object references have the same id and mutable objects like lists have different ids. So now according to his answer, tuples should have the same ids - meaning: a = (1, 2, 3) b = (1, 2, 3) a is b >>> False Ideally, as tuples are not mutable, it should return True, but it is returning False! What is the explanation?
Immutable objects don't necessarily have the same id, and as a matter of fact this is not true for any type of object that you define separately. Every time you define an object in Python, you'll create a new object with a new identity. But there are some exceptions for small integers (between -5 and 256) and small interned strings (with a special length, usually less than 20 characters), which are singletons and have the same id (actually one object with multiple pointers to it). You can check this fact like the following: >>> 30 is 20 + 10 True >>> >>> 300 is 200 + 100 False >>> 'aa' * 2 is 'a' * 4 True >>> 'aa' * 20 is 'a' * 40 False And for a custom object: >>> class A: ... pass ... >>> A() is A() # Every time you create an instance you'll have a new instance with a new identity False Also note that the is operator will check the object's identity, not the value. If you want to check the value you should use ==: >>> 300 == 3*100 True And since there is no such rule for tuples (or other types), if you define two identical tuples of any size they'll get their own ids: >>> a = (1,) >>> b = (1,) >>> >>> a is b False And note that the fact of singleton integers and interned strings is true even when you define them within mutable and immutable objects: >>> a = (100, 700, 400) >>> >>> b = (100, 700, 400) >>> >>> a[0] is b[0] True >>> a[1] is b[1] False
Exception during list comprehension. Are intermediate results kept anywhere?
When using try-except in a for loop context, the commands executed so far are obviously done with a = [1, 2, 3, 'text', 5] b = [] try: for k in range(len(a)): b.append(a[k] + 4) except: print('Error!') print(b) results with Error! [5, 6, 7] However the same is not true for list comprehensions c=[] try: c = [a[k] + 4 for k in range(len(a))] except: print('Error!') print(c) And the result is Error! [] Is the intermediate list, built before the exception occurred, kept anywhere? Is it accessible?
The list comprehension intermediate results are kept on an internal CPython stack, and are not accessible from the Python expressions that are part of the list comprehension. Note that Python executes the [.....] first, which produces a list object, and only then assigns that result to the name c. If an exception occurs within the [....] expression, the expression is terminated and exception handling kicks in instead. Your print(c) expression thus can only ever show the previous object that c was bound to, which here is an empty list object. It could have been anything else: >>> c = 'Anything else' >>> try: ... c = [2 // i for i in (1, 0)] ... except ZeroDivisionError: ... pass ... >>> c 'Anything else' In your first example, no new list object is produced. You instead manipulate (using b.append()) an existing list object, which is why you can see what all successful b.append() calls have done to it.
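If you do need the partial results, one workaround (my own sketch, not something the comprehension machinery offers) is to drive the same expression through a small helper that appends as it goes, so whatever was built before the exception survives:

def collect(iterable):
    out = []
    try:
        for item in iterable:
            out.append(item)      # keep every value produced so far
    except TypeError:
        print('Error!')
    return out

a = [1, 2, 3, 'text', 5]
c = collect(a[k] + 4 for k in range(len(a)))
print(c)   # [5, 6, 7]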
How to split data into 3 sets (train, validation and test)?
I have a pandas dataframe and I wish to divide it into 3 separate sets. I know that using train_test_split from sklearn.cross_validation, one can divide the data into two sets (train and test). However, I couldn't find any solution about splitting the data into three sets. Preferably, I'd like to have the indices of the original data. I know that a workaround would be to use train_test_split two times and somehow adjust the indices. But is there a more standard / built-in way to split the data into 3 sets instead of 2?
Numpy solution (thanks to root for the randomizing hint) - we will split our data set into the following parts: (60% - train set, 20% - validation set, 20% - test set): In [305]: train, validate, test = np.split(df.sample(frac=1), [int(.6*len(df)), int(.8*len(df))]) In [306]: train Out[306]: A B C D E 0 0.046919 0.792216 0.206294 0.440346 0.038960 2 0.301010 0.625697 0.604724 0.936968 0.870064 1 0.642237 0.690403 0.813658 0.525379 0.396053 9 0.488484 0.389640 0.599637 0.122919 0.106505 8 0.842717 0.793315 0.554084 0.100361 0.367465 7 0.185214 0.603661 0.217677 0.281780 0.938540 In [307]: validate Out[307]: A B C D E 5 0.806176 0.008896 0.362878 0.058903 0.026328 6 0.145777 0.485765 0.589272 0.806329 0.703479 In [308]: test Out[308]: A B C D E 4 0.521640 0.332210 0.370177 0.859169 0.401087 3 0.333348 0.964011 0.083498 0.670386 0.169619 PS [int(.6*len(df)), int(.8*len(df))] - is an indices_or_sections array for numpy.split() Here is a small demo for np.split() usage - let's split 20-elements array into the following parts: 90%, 10%, 10%: In [45]: a = np.arange(1, 21) In [46]: a Out[46]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]) In [47]: np.split(a, [int(.8 * len(a)), int(.9 * len(a))]) Out[47]: [array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]), array([17, 18]), array([19, 20])]
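If you would rather stay inside scikit-learn, the two-step workaround mentioned in the question is short enough in practice; the 0.4/0.5 test_size values below are just one way to end up with a 60/20/20 split (newer scikit-learn exposes train_test_split from sklearn.model_selection, older versions from sklearn.cross_validation):

from sklearn.model_selection import train_test_split

# df is the DataFrame from the question
# 60% train, then split the remaining 40% in half -> 20% validate, 20% test
train, rest = train_test_split(df, test_size=0.4, random_state=42)
validate, test = train_test_split(rest, test_size=0.5, random_state=42)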
Create a list of tuples with adjacent list elements if a condition is true
I am trying to create a list of tuples where the tuple contents are the number 9 and the number before it in the list. Input List: myList = [1, 8, 9, 2, 4, 9, 6, 7, 9, 8] Desired Output: sets = [(8, 9), (4, 9), (7, 9)] Code: sets = [list(zip(myList[i:i], myList[-1:])) for i in myList if i==9] Current Result: [[], [], []]
Cleaner Pythonic approach: >>> [(x,y) for x,y in zip(myList, myList[1:]) if y == 9] [(8, 9), (4, 9), (7, 9)] What is the code above doing: zip(some_list, some_list[1:]) would generate a list of pairs of adjacent elements. Now with that tuple, filter on the condition that the second element is equal to 9. You're done :)
Comparison of Pandas lookup times
After experimenting with timing various types of lookups on a Pandas DataFrame I am left with a few questions. Here is the set up... import pandas as pd import numpy as np import itertools letters = [chr(x) for x in range(ord('a'), ord('z'))] letter_combinations = [''.join(x) for x in itertools.combinations(letters, 3)] df1 = pd.DataFrame({ 'value': np.random.normal(size=(1000000)), 'letter': np.random.choice(letter_combinations, 1000000) }) df2 = df1.sort_values('letter') df3 = df1.set_index('letter') df4 = df3.sort_index() So df1 looks something like this... print(df1.head(5)) >>> letter value 0 bdh 0.253778 1 cem -1.915726 2 mru -0.434007 3 lnw -1.286693 4 fjv 0.245523 Here is the code to test differences in lookup performance... print('~~~~~~~~~~~~~~~~~NON-INDEXED LOOKUPS / UNSORTED DATASET~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~') %timeit df1[df1.letter == 'ben'] %timeit df1[df1.letter == 'amy'] %timeit df1[df1.letter == 'abe'] print('~~~~~~~~~~~~~~~~~NON-INDEXED LOOKUPS / SORTED DATASET~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~') %timeit df2[df2.letter == 'ben'] %timeit df2[df2.letter == 'amy'] %timeit df2[df2.letter == 'abe'] print('~~~~~~~~~~~~~~~~~~~~~INDEXED LOOKUPS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~') %timeit df3.loc['ben'] %timeit df3.loc['amy'] %timeit df3.loc['abe'] print('~~~~~~~~~~~~~~~~~~~~~SORTED INDEXED LOOKUPS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~') %timeit df4.loc['ben'] %timeit df4.loc['amy'] %timeit df4.loc['abe'] And the results... ~~~~~~~~~~~~~~~~~NON-INDEXED LOOKUPS / UNSORTED DATASET~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 10 loops, best of 3: 59.7 ms per loop 10 loops, best of 3: 59.7 ms per loop 10 loops, best of 3: 59.7 ms per loop ~~~~~~~~~~~~~~~~~NON-INDEXED LOOKUPS / SORTED DATASET~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 10 loops, best of 3: 192 ms per loop 10 loops, best of 3: 192 ms per loop 10 loops, best of 3: 193 ms per loop ~~~~~~~~~~~~~~~~~~~~~INDEXED LOOKUPS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The slowest run took 4.66 times longer than the fastest. This could mean that an intermediate result is being cached 10 loops, best of 3: 40.9 ms per loop 10 loops, best of 3: 41 ms per loop 10 loops, best of 3: 40.9 ms per loop ~~~~~~~~~~~~~~~~~~~~~SORTED INDEXED LOOKUPS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The slowest run took 1621.00 times longer than the fastest. This could mean that an intermediate result is being cached 1 loops, best of 3: 259 µs per loop 1000 loops, best of 3: 242 µs per loop 1000 loops, best of 3: 243 µs per loop Questions... It's pretty clear why the lookup on the sorted index is so much faster, binary search to get O(log(n)) performance vs O(n) for a full array scan. But, why is the lookup on the sorted non-indexed df2 column SLOWER than the lookup on the unsorted non-indexed column df1? What is up with the The slowest run took x times longer than the fastest. This could mean that an intermediate result is being cached. Surely, the results aren't being cached. Is it because the created index is lazy and isn't actually reindexed until needed? That would explain why it is only on the first call to .loc[]. Why isn't an index sorted by default? The fixed cost of the sort can be too much?
The disparity in these %timeit results In [273]: %timeit df1[df1['letter'] == 'ben'] 10 loops, best of 3: 36.1 ms per loop In [274]: %timeit df2[df2['letter'] == 'ben'] 10 loops, best of 3: 108 ms per loop also shows up in the pure NumPy equality comparisons: In [275]: %timeit df1['letter'].values == 'ben' 10 loops, best of 3: 24.1 ms per loop In [276]: %timeit df2['letter'].values == 'ben' 10 loops, best of 3: 96.5 ms per loop Under the hood, Pandas' df1['letter'] == 'ben' calls a Cython function which loops through the values of the underlying NumPy array, df1['letter'].values. It is essentially doing the same thing as df1['letter'].values == 'ben' but with different handling of NaNs. Moreover, notice that simply accessing the items in df1['letter'] in sequential order can be done more quickly than doing the same for df2['letter']: In [11]: %timeit [item for item in df1['letter']] 10 loops, best of 3: 49.4 ms per loop In [12]: %timeit [item for item in df2['letter']] 10 loops, best of 3: 124 ms per loop The difference in times within each of these three sets of %timeit tests are roughly the same. I think that is because they all share the same cause. Since the letter column holds strings, the NumPy arrays df1['letter'].values and df2['letter'].values have dtype object and therefore they hold pointers to the memory location of the arbitrary Python objects (in this case strings). Consider the memory location of the strings stored in the DataFrames, df1 and df2. In CPython the id returns the memory location of the object: memloc = pd.DataFrame({'df1': list(map(id, df1['letter'])), 'df2': list(map(id, df2['letter'])), }) df1 df2 0 140226328244040 140226299303840 1 140226328243088 140226308389048 2 140226328243872 140226317328936 3 140226328243760 140226230086600 4 140226328243368 140226285885624 The strings in df1 (after the first dozen or so) tend to appear sequentially in memory, while sorting causes the strings in df2 (taken in order) to be scattered in memory: In [272]: diffs = memloc.diff(); diffs.head(30) Out[272]: df1 df2 0 NaN NaN 1 -952.0 9085208.0 2 784.0 8939888.0 3 -112.0 -87242336.0 4 -392.0 55799024.0 5 -392.0 5436736.0 6 952.0 22687184.0 7 56.0 -26436984.0 8 -448.0 24264592.0 9 -56.0 -4092072.0 10 -168.0 -10421232.0 11 -363584.0 5512088.0 12 56.0 -17433416.0 13 56.0 40042552.0 14 56.0 -18859440.0 15 56.0 -76535224.0 16 56.0 94092360.0 17 56.0 -4189368.0 18 56.0 73840.0 19 56.0 -5807616.0 20 56.0 -9211680.0 21 56.0 20571736.0 22 56.0 -27142288.0 23 56.0 5615112.0 24 56.0 -5616568.0 25 56.0 5743152.0 26 56.0 -73057432.0 27 56.0 -4988200.0 28 56.0 85630584.0 29 56.0 -4706136.0 Most of the strings in df1 are 56 bytes apart: In [14]: In [16]: diffs['df1'].value_counts() Out[16]: 56.0 986109 120.0 13671 -524168.0 215 -56.0 1 -12664712.0 1 41136.0 1 -231731080.0 1 Name: df1, dtype: int64 In [20]: len(diffs['df1'].value_counts()) Out[20]: 7 In contrast the strings in df2 are scattered all over the place: In [17]: diffs['df2'].value_counts().head() Out[17]: -56.0 46 56.0 44 168.0 39 -112.0 37 -392.0 35 Name: df2, dtype: int64 In [19]: len(diffs['df2'].value_counts()) Out[19]: 837764 When these objects (strings) are located sequentially in memory, their values can be retrieved more quickly. This is why the equality comparisons performed by df1['letter'].values == 'ben' can be done faster than those in df2['letter'].values == 'ben'. The lookup time is smaller. This memory accessing issue also explains why there is no disparity in the %timeit results for the value column. 
In [5]: %timeit df1[df1['value'] == 0] 1000 loops, best of 3: 1.8 ms per loop In [6]: %timeit df2[df2['value'] == 0] 1000 loops, best of 3: 1.78 ms per loop df1['value'] and df2['value'] are NumPy arrays of dtype float64. Unlike object arrays, their values are packed together contiguously in memory. Sorting df1 with df2 = df1.sort_values('letter') causes the values in df2['value'] to be reordered, but since the values are copied into a new NumPy array, the values are located sequentially in memory. So accessing the values in df2['value'] can be done just as quickly as those in df1['value'].
Can generators be recursive?
I naively tried to create a recursive generator. Didn't work. This is what I did: def recursive_generator(lis): yield lis[0] recursive_generator(lis[1:]) for k in recursive_generator([6,3,9,1]): print(k) All I got was the first item 6. Is there a way to make such code work? Essentially transferring the yield command to the level above in a recursion scheme?
Try this: def recursive_generator(lis): yield lis[0] yield from recursive_generator(lis[1:]) for k in recursive_generator([6,3,9,1]): print(k) I should point out this doesn't work because of a bug in your function. It should probably include a check that lis isn't empty, as shown below: def recursive_generator(lis): if lis: yield lis[0] yield from recursive_generator(lis[1:]) In case you are on Python 2.7 and don't have yield from, check this question out.
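The same yield from pattern scales to less linear recursion too; here is a small, hopefully uncontroversial example that flattens arbitrarily nested lists:

def flatten(nested):
    for item in nested:
        if isinstance(item, list):
            yield from flatten(item)   # recurse into the sublist
        else:
            yield item

print(list(flatten([1, [2, [3, 4]], 5])))   # [1, 2, 3, 4, 5]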
Max recursion is not exactly what sys.getrecursionlimit() claims. How come?
I've made a small function that will actually measure the max recursion limit: def f(x): r = x try: r = f(x+1) except Exception as e: print(e) finally: return r To know what to expect I've checked: In [28]: import sys In [29]: sys.getrecursionlimit() Out[29]: 1000 However In [30]: f(0) maximum recursion depth exceeded Out[30]: 970 The number is not fixed, always around ~970, and slightly changes between different instances of python (e.g. from within spyder to system cmd prompt). Please note that I'm using ipython on python3. What's going on? Why is the actual limit I'm getting lower than the sys.getrecursionlimit() value?
The recursion limit is not the limit on recursion but the maximum depth of the python interpreter stack. There is something on the stack before your function gets executed. Spyder executes some python stuff before it calls your script, as do other interpreters like ipython. You can inspect the stack via methods in the inspect module. In CPython for me: >>> print(len(inspect.stack())) 1 In IPython for me: >>> print(len(inspect.stack())) 10 As knbk pointed out in the comments, as soon as you hit the stack limit a RecursionError is thrown and the interpreter raises the stack limit a bit to give you a possibility to handle the error gracefully. If you also exhaust that limit python will crash.
Is extending a Python list (e.g. l += [1]) guaranteed to be thread-safe?
If I have an integer i, it is not safe to do i += 1 on multiple threads: >>> i = 0 >>> def increment_i(): ... global i ... for j in range(1000): i += 1 ... >>> threads = [threading.Thread(target=increment_i) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> i 4858 # Not 10000 However, if I have a list l, it does seem safe to do l += [1] on multiple threads: >>> l = [] >>> def extend_l(): ... global l ... for j in range(1000): l += [1] ... >>> threads = [threading.Thread(target=extend_l) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> len(l) 10000 Is l += [1] guaranteed to be thread-safe? If so, does this apply to all Python implementations or just CPython? Edit: It seems that l += [1] is thread-safe but l = l + [1] is not... >>> l = [] >>> def extend_l(): ... global l ... for j in range(1000): l = l + [1] ... >>> threads = [threading.Thread(target=extend_l) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> len(l) 3305 # Not 10000
There isn't a happy ;-) answer to this. There's nothing guaranteed about any of it, which you can confirm simply by noting that the Python reference manual makes no guarantees about atomicity. In CPython it's a matter of pragmatics. As a snipped part of effbot's article says, In theory, this means an exact accounting requires an exact understanding of the PVM [Python Virtual Machine] bytecode implementation. And that's the truth. A CPython expert knows L += [x] is atomic because they know all of the following: += compiles to an INPLACE_ADD bytecode. The implementation of INPLACE_ADD for list objects is written entirely in C (no Python code is on the execution path, so the GIL can't be released between bytecodes). In listobject.c, the implementation of INPLACE_ADD is function list_inplace_concat(), and nothing during its execution needs to execute any user Python code either (if it did, the GIL may again be released). That may all sound incredibly difficult to keep straight, but for someone with effbot's knowledge of CPython's internals (at the time he wrote that article), it really isn't. In fact, given that depth of knowledge, it's all kind of obvious ;-) So as a matter of pragmatics, CPython experts have always freely relied on that "operations that 'look atomic' should really be atomic", and that also guided some language decisions. For example, an operation missing from effbot's list (added to the language after he wrote that article): x = D.pop(y) # or ... x = D.pop(y, default) One argument (at the time) in favor of adding dict.pop() was precisely that the obvious C implementation would be atomic, whereas the in-use (at the time) alternative: x = D[y] del D[y] was not atomic (the retrieval and the deletion are done via distinct bytecodes, so threads can switch between them). But the docs never said .pop() was atomic, and never will. This is a "consenting adults" kind of thing: if you're expert enough to exploit this knowingly, you don't need hand-holding. If you're not expert enough, then the last sentence of effbot's article applies: When in doubt, use a mutex! As a matter of pragmatic necessity, core developers will never break the atomicity of effbot's examples (or of D.pop() or D.setdefault()) in CPython. Other implementations are under no obligation at all to mimic these pragmatic choices, though. Indeed, since atomicity in these cases relies on CPython's specific form of bytecode combined with CPython's use of a global interpreter lock that can only be released between bytecodes, it could be a real pain for other implementations to mimic them. And you never know: some future version of CPython may remove the GIL too! I doubt it, but it's theoretically possible. But if that happens, I bet a parallel version retaining the GIL will be maintained too, because a whole lot of code (especially extension modules written in C) relies on the GIL for thread safety too. Worth repeating: When in doubt, use a mutex!
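Following that advice, a sketch of the mutex version of the question's first example looks like this; with the lock, the counter reaches 10000 on any conforming implementation, GIL or not:

import threading

i = 0
lock = threading.Lock()

def increment_i():
    global i
    for _ in range(1000):
        with lock:        # serialize the read-modify-write on i
            i += 1

threads = [threading.Thread(target=increment_i) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(i)   # 10000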
Applications of '~' (tilde) operator in Python
I just discovered the bitwise complement unary operation in Python via this question and have been trying to come up with an actual application for it, and if not, to determine if it's generally safe to overload the operator (by overriding the __invert__ method) for other uses. The example given in the question fails with a TypeError, and the link provided seems pretty intimidating. Here's some fiddling around to see ~ in use: from bitstring import BitArray x = 7 print(~x) # -8 print(BitArray(int=x, length=4).bin) # '0111' print(BitArray(int=~x, length=4).bin) # '1000' print(~~True, ~~False) # 1 0 for i in range(-100, 100): assert i + ~i == -1 assert i ^ ~i == -1 assert bool(i) == ~~bool(i) Are there any examples of valid use-cases for this operator that I should be aware of? And even if there are, is it generally acceptable to override this operator for types other than int?
The standard use cases for the bitwise NOT operator are bitwise operations, just like the bitwise AND &, the bitwise OR |, the bitwise XOR ^, and bitwise shifting << and >>. Although they are rarely used in higher level applications, there are still some times where you need to do bitwise manipulations, so that’s why they are there. Of course, you may overwrite these for custom types, and in general you are not required to follow any specific semantics when doing so. Just choose what makes sense for your type and what still fits the operator in some way. If the operation is obscure and better explained with a word or two, then you should use a standard method instead. But there are some situations, especially when working with number related types, that could have some mathematical-like operations which fit the bitwise operators, and as such are fine to use those. Just like you would overwrite standard operators like + and - only for meaningful operations, you should try to do the same for bitwise operators. The reason ~~True, ~~False gives you (1, 0) is because the bool type does not define its own __invert__ operation. However, int does; and bool is actually a subtype of int. So bool actually inherits the logic of all bitwise and arithmetical operators. That’s why True + True == 2 etc.
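Two concrete, widely used examples of overriding __invert__ for something other than a plain int are NumPy and pandas, where ~ acts as element-wise logical NOT on boolean arrays; a small illustration (assuming both libraries are installed):

import numpy as np
import pandas as pd

mask = np.array([True, False, True])
print(~mask)                 # [False  True False]

df = pd.DataFrame({'x': [1, 2, 3]})
print(df[~(df['x'] > 1)])    # rows where the condition does NOT hold, i.e. x == 1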
Unpack a Python tuple from left to right?
Is there a clean/simple way to unpack a Python tuple on the right hand side from left to right? For example, given j = 1,2,3,4,5,6,7 (i.e. the tuple (1,2,3,4,5,6,7)) and v,b,n = j[4:7] can I modify the slice notation so that v = j[6], b = j[5], n = j[4]? I realise I can just order the left side to get the desired elements, but there might be instances where I would just want to unpack the tuple from left to right, I think.
This should do it: v,b,n = j[6:3:-1] The slice starts at index 6 with a step of -1 and stops before index 3, so v = j[6], b = j[5] and n = j[4].
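For example, with the tuple from the question:
j = (1, 2, 3, 4, 5, 6, 7)
v, b, n = j[6:3:-1]
print(v, b, n)  # 7 6 5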
Why is str.strip() so much faster than str.strip(' ')?
Stripping white-space can be done in two ways with str.strip. You can either issue a call with no arguments, str.strip(), which defaults to using a white-space delimiter, or explicitly supply the argument yourself with str.strip(' '). But why is it that, when timed, these functions perform so differently? Using a sample string with an intentional amount of white space: s = " " * 100 + 'a' + " " * 100 The timings for s.strip() and s.strip(' ') are respectively: %timeit s.strip() The slowest run took 32.74 times longer than the fastest. This could mean that an intermediate result is being cached. 1000000 loops, best of 3: 396 ns per loop %timeit s.strip(' ') 100000 loops, best of 3: 4.5 µs per loop strip takes 396 ns while strip(' ') takes 4.5 µs; a similar scenario is present with rstrip and lstrip under the same conditions. Also, bytes objects seem to be affected too. The timings were performed on Python 3.5.2; on Python 2.7.1 the difference is less drastic. The docs on str.strip don't indicate anything useful, so why does this happen?
In a tl;dr fashion: This is because two functions exist for the two different cases, as can be seen in unicode_strip: do_strip and _PyUnicode_XStrip, with the first executing much faster than the second. The function do_strip is for the common case str.strip() where no arguments are given, and do_argstrip (which wraps _PyUnicode_XStrip) is for the case where str.strip(arg) is called, i.e. arguments are provided. do_argstrip just checks the separator and, if it is valid and not equal to None (in which case it calls do_strip), it calls _PyUnicode_XStrip. Both do_strip and _PyUnicode_XStrip follow the same logic: two counters are used, one equal to zero and the other equal to the length of the string. Using two while loops, the first counter is incremented until a value not equal to the separator is reached and the second counter is decremented until the same condition is met. The difference lies in how the check that the current character is not equal to the separator is performed. For do_strip: In the most common case, where the characters in the string to be stripped can be represented in ASCII, an additional small performance boost is present. while (i < len) { Py_UCS1 ch = data[i]; if (!_Py_ascii_whitespace[ch]) break; i++; } Accessing the current character in the data is done quickly by indexing the underlying array: Py_UCS1 ch = data[i]; The check whether a character is white-space is a simple index into an array called _Py_ascii_whitespace: _Py_ascii_whitespace[ch]. So, in short, it is quite efficient. If the characters are not in the ASCII range, the differences aren't that drastic but they do slow the overall execution down: while (i < len) { Py_UCS4 ch = PyUnicode_READ(kind, data, i); if (!Py_UNICODE_ISSPACE(ch)) break; i++; } Accessing is done with Py_UCS4 ch = PyUnicode_READ(kind, data, i); and checking whether the character is white-space is done by the Py_UNICODE_ISSPACE(ch) macro (which simply calls another macro: Py_ISSPACE). For _PyUnicode_XStrip: In this case, accessing the underlying data is, as in the previous case, done with PyUnicode_READ; the check to see whether the character is a white-space (or really, any character we've provided), on the other hand, is considerably more complex. while (i < len) { Py_UCS4 ch = PyUnicode_READ(kind, data, i); if (!BLOOM(sepmask, ch)) break; if (PyUnicode_FindChar(sepobj, ch, 0, seplen, 1) < 0) break; i++; } PyUnicode_FindChar is used, which, although efficient, is much more complex and slow compared to an array access. For each character in the string it is called to see if that character is contained in the separator(s) we've provided. As the length of the string increases, so does the overhead introduced by calling this function continuously. For those interested, PyUnicode_FindChar, after quite a few checks, will eventually call find_char inside stringlib, which, in the case where the length of the separators is < 10, will loop until it finds the character. Apart from this, consider the additional functions that already need to be called in order to get here. As for lstrip and rstrip, the situation is similar. Flags for which mode of stripping to perform exist, namely: RIGHTSTRIP for rstrip, LEFTSTRIP for lstrip and BOTHSTRIP for strip. The logic inside do_strip and _PyUnicode_XStrip is performed conditionally based on the flag.
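A rough way to see that the gap grows with the amount of work per call (an illustrative benchmark, not from the original answer; absolute numbers will vary by machine and Python version):
import timeit

for pad in (10, 100, 1000):
    s = " " * pad + "a" + " " * pad
    t_default = timeit.timeit("s.strip()", globals={"s": s}, number=100000)
    t_explicit = timeit.timeit("s.strip(' ')", globals={"s": s}, number=100000)
    print(pad, round(t_explicit / t_default, 1))  # ratio grows with the padding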
Is there special significance to 16331239353195370.0?
Using import numpy as np I've noticed that np.tan(np.pi/2) gives the number in the title and not np.inf 16331239353195370.0 I'm curious about this number. Is it related to some system machine precision parameter? Could I have calculated it from something? (I'm thinking along the lines of something similar to sys.float_info) EDIT: The same result is indeed reproducible in other environments such as Java, octace, matlab... The suggested dupe does not explain why, though.
pi isn't exactly representable as Python float (same as the platform C's double type). The closest representable approximation is used. Here's the exact approximation in use on my box (probably the same as on your box): >>> import math >>> (math.pi / 2).as_integer_ratio() (884279719003555, 562949953421312) To find the tangent of that ratio, I'm going to switch to wxMaxima now: (%i1) fpprec: 32; (%o1) 32 (%i2) tan(bfloat(884279719003555) / 562949953421312); (%o2) 1.6331239353195369755967737041529b16 So essentially identical to what you got. The binary approximation to pi/2 used is a little bit less than the mathematical ("infinite precision") value of pi/2. So you get a very large tangent instead of infinity. The computed tan() is appropriate for the actual input! For exactly the same kinds of reasons, e.g., >>> math.sin(math.pi) 1.2246467991473532e-16 doesn't return 0. The approximation math.pi is a little bit less than pi, and the displayed result is correct given that truth. OTHER WAYS OF SEEING math.pi There are several ways to see the exact approximation in use: >>> import math >>> math.pi.as_integer_ratio() (884279719003555, 281474976710656) math.pi is exactly equal to the mathematical ("infinite precision") value of that ratio. Or as an exact float in hex notation: >>> math.pi.hex() '0x1.921fb54442d18p+1' Or in a way most easily understood by just about everyone: >>> import decimal >>> decimal.Decimal(math.pi) Decimal('3.141592653589793115997963468544185161590576171875') While it may not be immediately obvious, every finite binary float is exactly representable as a finite decimal float (the reverse is not true; e.g. the decimal 0.1 is not exactly representable as a finite binary float), and the Decimal(some_float) constructor produces the exact equivalent. Here's the true value of pi followed by the exact decimal value of math.pi, and a caret on the third line points to the first digit where they differ: true 3.14159265358979323846264338327950288419716939937510... math.pi 3.141592653589793115997963468544185161590576171875 ^ math.pi is the same across "almost all" boxes now, because almost all boxes now use the same binary floating-point format (IEEE 754 double precision). You can use any of the ways above to confirm that on your box, or to find the precise approximation in use if your box is an exception.
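For reference, the wxMaxima step can also be reproduced in pure Python, assuming the third-party mpmath package is available (this is an addition, not part of the original answer):
import math
from mpmath import mp, mpf, tan

mp.dps = 32  # 32 significant decimal digits
num, den = (math.pi / 2).as_integer_ratio()
print(tan(mpf(num) / mpf(den)))  # ~1.6331239353195369755967737041529e+16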
Django CSRF cookie not set correctly
Update 7-18: Here is my nginx config for the proxy server: server { listen 80; server_name blah.com; # the blah is intentional access_log /home/cheng/logs/access.log; error_log /home/cheng/logs/error.log; location / { proxy_pass http://127.0.0.1:8001; } location /static { alias /home/cheng/diandi/staticfiles; } location /images { alias /home/cheng/diandi/images; } client_max_body_size 10M; } Here is nginx.conf: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip_disable "msie6"; # Enable Gzip compressed. gzip on; # Enable compression both for HTTP/1.0 and HTTP/1.1. gzip_http_version 1.1; # Compression level (1-9). # 5 is a perfect compromise between size and cpu usage, offering about # 75% reduction for most ascii files (almost identical to level 9). gzip_comp_level 5; # Don't compress anything that's already small and unlikely to shrink much # if at all (the default is 20 bytes, which is bad as that usually leads to # larger files after gzipping). gzip_min_length 256; # Compress data even for clients that are connecting to us via proxies, # identified by the "Via" header (required for CloudFront). gzip_proxied any; # Tell proxies to cache both the gzipped and regular version of a resource # whenever the client's Accept-Encoding capabilities header varies; # Avoids the issue where a non-gzip capable client (which is extremely rare # today) would display gibberish if their proxy gave them the gzipped version. gzip_vary on; # Compress all output labeled with one of the following MIME-types. gzip_types application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml application/x-javascript font/opentype image/svg+xml image/x-icon text/css text/plain text/javascript text/js text/x-component; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Update 7-15: When copying code to the linux machines, I simply replaced the original source code file but didn't delete the old .pyc files which I don't think will cause trouble right? Here is the view code: from django.contrib.auth import authenticate, login from django.http import HttpResponseRedirect from django.core.urlresolvers import reverse from django.shortcuts import render def login_view(request): if request.method == 'POST': username = request.POST['username'] password = request.POST['password'] user = authenticate(username=username, password=password) next_url = request.POST['next'] if user is not None: if user.is_active: login(request, user) if next_url: return HttpResponseRedirect(next_url) return HttpResponseRedirect(reverse('diandi:list')) else: form = {'errors': True} return render(request, 'registration/login.html', {'form': form}) else: form = {'errors': False} return render(request, 'registration/login.html', {'form': form}) I got one of those CSRF cookie not set error from Django, but this is not because I forgot to include the {% csrf_token %} in my template. 
Here is what I observed: Access login page #1 try Inside the Request Header, the cookie value is: csrftoken=yNG8ZmSI4tr2xTLoE9bys8JbSuu9SD34; In the template: <input type="hidden" name="csrfmiddlewaretoken" value="9CVlFSxOo0xiYykIxRmvbWyN5iEUHnPB"> In a cookie plugin that I installed on chrome, the actual csrf cookie value is set to: 9CVlFSxOo0xiYykIxRmvbWyN5iEUHnPB Access login page #2 try: Inside the Request Header, the cookie value is: csrftoken=9CVlFSxOo0xiYykIxRmvbWyN5iEUHnPB; In the template: <input type="hidden" name="csrfmiddlewaretoken" value="Y534sU40S8iTubSVGjjh9KQl0FXesVsC"> In a cookie plugin that I installed on chrome, the actual csrf cookie value is set to: Y534sU40S8iTubSVGjjh9KQl0FXesVsC The pattern As you can see from the examples above, the cookie value inside the Request Header differs from the actual csrfmiddlewaretoken in the form and the actual cookie value being set. The cookie value of the current request matches the next request header's cookie value. To help debugging, here is a portion of my `settings.py: DJANGO_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ) THIRD_PARTY_APPS = ( 'compressor', 'crispy_forms', 'django_extensions', 'floppyforms', 'multiselectfield', 'admin_highcharts', ) LOCAL_APPS = ( 'diandi_project', 'uer_application', ) INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS MIDDLEWARE_CLASSES = ( 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ) TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [str(ROOT_DIR.path('templates'))], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.media', 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] I am using Django 1.9.5 and python 2.7.10. One "solution" I have encountered this problem before, I can clear all my browser cookies and the site will function properly. But this problem will eventually come up again, so I am really hoping someone can help me out (I probably just made a really dumb mistake somewhere). Update Originally, I thought I made some mistakes while overriding the django.contrib.auth.view page, so I wrote my own login page handler and it still causes the issue. Here is the core part of my login template: {% block content %} ... <form method="post" action="{% url 'login' %}"> {% csrf_token %} <div class="form-group"> <label for="username">username</label> <input type="text" class="form-control" id="id_username" name="username"> </div> <div class="form-group"> <label for="password">password</label> <input type="password" class="form-control" id="id_password" name="password"> </div> <input type="submit" class="btn btn-default" value="login" /> <input type="hidden" id="next" name="next" value="" /> </form> ... 
{% endblock %} On the Linux machines, I have a nginx server setup as a reverse proxy which direct request on port 80 to 8001, and I am running the server using ./manage runserver localhost:8001 This is the only difference I can think of in terms of setup. Otherwise, all of the source code and settings file are identical. I started deleting cookies but not all of them, this is what I see before deleting them: I deleted all the cookies other than djdt and csrftoken, then the page worked. Could the deleted cookies somehow go over some limit which prevent the csrftoken which is further down the list from being set? Here is the cookie value of the image above in the request header: Cookie:PSTM=1466561622; BIDUPSID=6D0DDB8084625F2CEB7B9D0F14F93391; BAIDUID=326150BF5A6DFC69B6CFEBD67CA7A18B:FG=1; BDSFRCVID=Fm8sJeC62leqR8bRqWS1u8KOKg9JUZOTH6ao6BQjXAcTew_mbPF_EG0PJOlQpYD-hEb5ogKK0mOTHvbP; H_BDCLCKID_SF=tJPqoCtKtCvbfP0k-tcH244HqxbXq-r8fT7Z0lOnMp05EnnjKl5M3qKOqJraJJ585Gbb5tOhaKj-VDO_e6u-e55LjaRh2PcM2TPXQ458K4__Hn7zep0aqJtpbt-qJjbOfmQBbfoDQCTDfho5b63JyTLqLq5nBT5Ka26WVpQEQM5c8hje-4bMXPkkQN3T-TJQL6RkKTCyyx3cDn3oyToVXp0njGoTqj-eJbA8_CtQbPoHHnvNKCTV-JDthlbLetJyaR3lWCnbWJ5TMCo1bJQCe-DwKJJgJRLOW2Oi0KTFQxccShPC-tP-Ll_qW-Q2LPQfXKjabpQ73l02VhcOhhQ2Wf3DM-oat4RMW20jWl7mWPQDVKcnK4-Xj533DHjP; BDUSS=5TNmRvZnh2eUFXZDA5WXI5UG1HaXYwbzItaWt3SW5adjE1Nn5XbUVoWHZuYXBYQVFBQUFBJCQAAAAAAAAAAAEAAAC0JtydAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAO8Qg1fvEINXSU; Hm_lvt_a7708f393bfa27123a1551fef4551f7a=1468229606; Hm_lpvt_a7708f393bfa27123a1551fef4551f7a=1468229739; BDRCVFR[feWj1Vr5u3D]=I67x6TjHwwYf0; BDRCVFR[dG2JNJb_ajR]=mk3SLVN4HKm; BDRCVFR[-pGxjrCMryR]=mk3SLVN4HKm; cflag=15%3A3; H_PS_PSSID=1424_20515_13289_20536_20416_19861_14994_11792; csrftoken=xUgSHybzHeIwusN0GvMgB1ATeRrPgcV1 Since the site functions now, all I have are five cookies instead of 14 like the image above:
Here is the issue: You cannot have a cookie whose key contains either the character '[' or ']'. I discovered the solution by following @Todor's link, and then I found out about this SO post. Basically there was a bug in Python 2.7.x that prevented cookies with ']' in the value from being parsed. The bug was fixed in 2.7.10. I thought it would be good to confirm this issue, so I dug through all of the cookies and found one with the following key/value: key: BDRCVFR[feWj1Vr5u3D] val: I67x6TjHwwYf0 So I inserted the following cookie locally and submitted it to the server: key: test val: BDRCVFR[feWj1Vr5u3D] The login page worked, which means 2.7.10 indeed fixed the bug. But then I realized that the square brackets are actually in the key name, not in the value, so I did the following tests: key: [ val: I67x6TjHwwYf0 and key: ] val: I67x6TjHwwYf0 Both cookies break the login process and Django displays: CSRF cookie not set So either Django or a Python library it relies on cannot parse cookies with square brackets in their names properly. If anybody knows where I should submit this bug (Django or Python), please let me know. I would like to thank everybody who left a comment in the OP: @raphv, @trinchet, @Phillip, @YPCrumble, @PeterBrittain and @Todor. Thank you guys so much for debugging with me! Update: July 20, 2016 This bug is fixed in Django 1.10; we just have to wait for the release. Update: July 19, 2016 I filed a bug report with Django as a result of this post. We will see if it will be fixed in future releases.
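A hypothetical way to see the underlying parsing problem from plain Python (behaviour depends on the exact Python 2.7.x point release, so treat this as a sketch rather than a guaranteed reproduction):
from Cookie import SimpleCookie  # http.cookies.SimpleCookie on Python 3

c = SimpleCookie()
c.load('BDRCVFR[feWj1Vr5u3D]=I67x6TjHwwYf0; csrftoken=xUgSHybzHeIwusN0GvMgB1ATeRrPgcV1')
print(c.keys())  # on affected releases the csrftoken entry is silently dropped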
Why does '() is ()' return True when '[] is []' and '{} is {}' return False?
From what I've been aware of, using [], {}, () to instantiate objects returns a new instance of list, dict, tuple respectively; a new instance object with a new identity*. This was pretty clear to me until I actually tested it where I noticed that () is () actually returns False instead of the expected True: >>> () is (), [] is [], {} is {} (True, False, False) and, as expected, this behavior is also manifested when explicitly creating objects with list(), dict() and tuple(): >>> tuple() is tuple(), list() is list(), dict() is dict() (True, False, False) The only relevant piece of information I could find in the docs for tuple() states: [...] For example, tuple('abc') returns ('a', 'b', 'c') and tuple([1, 2, 3]) returns (1, 2, 3). If no argument is given, the constructor creates a new empty tuple, (). Suffice to say, this isn't sufficient for answering my question. So, why do empty tuples have the same identity whilst others like lists or dictionaries do not? *Note, this question is not about what the is operator does, as explained in Understanding Python's “is” operator, but rather, why it behaves as it does in this specific case.
In short: Python internally creates a C list of tuple objects whose first element contains the empty tuple. Every time tuple() or () is used, Python will return the existing object contained in the aforementioned C list and not create a new one. Such a mechanism does not exist for dict or list objects, which are, on the contrary, recreated from scratch every time. This is most likely related to the fact that immutable objects (like tuples) cannot be altered and, as such, are guaranteed to not change during execution. This is further solidified when considering that frozenset() is frozenset() returns True; like (), an empty frozenset is considered a singleton in the implementation of CPython. With mutable objects, such guarantees are not in place and, as such, there's no incentive to cache their zero-element instances (i.e. their contents could change with the identity remaining the same). Take note: This isn't something one should depend on, i.e. one shouldn't consider empty tuples to be singletons. No such guarantees are explicitly made in the documentation, so one should assume it is implementation dependent. How it is done: In the most common case, the implementation of CPython is compiled with the two macros PyTuple_MAXFREELIST and PyTuple_MAXSAVESIZE set to positive integers. The positive value for these macros results in the creation of an array of tuple objects with size PyTuple_MAXSAVESIZE. When PyTuple_New is called with the parameter size == 0 it makes sure to add a new empty tuple to the list if it doesn't already exist: if (size == 0) { free_list[0] = op; ++numfree[0]; Py_INCREF(op); /* extra INCREF so that this is never freed */ } Then, if a new empty tuple is requested, the one that is located in the first position of this list is going to get returned instead of a new instance: if (size == 0 && free_list[0]) { op = free_list[0]; Py_INCREF(op); /* rest snipped for brevity.. */ One additional incentive to do this is the fact that function calls construct a tuple to hold the positional arguments that are going to be used. This can be seen in the load_args function in ceval.c: static PyObject * load_args(PyObject ***pp_stack, int na) { PyObject *args = PyTuple_New(na); /* rest snipped for brevity.. */ which is called via do_call in the same file. If the number of arguments na is zero, an empty tuple is going to be returned. In essence, this might be an operation that's performed frequently, so it makes sense to not reconstruct an empty tuple every single time. Further reading: A couple more answers shed light on CPython's caching behaviour with immutables: For integers, another answer that digs into the source can be found here. For strings, a handful of answers can be found here, here and here.
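The effect is easy to observe from Python itself (again, a CPython implementation detail, not something to rely on):
print(() is ())                    # True
print(tuple([]) is ())             # True: still the cached empty tuple
print(frozenset() is frozenset())  # True, as mentioned above
print([] is [], {} is {})          # False False: mutable, so rebuilt each time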
Why does `str.format()` ignore additional/unused arguments?
I saw "Why doesn't join() automatically convert its arguments to strings?" and the accepted answer made me think: since Explicit is better than implicit. and Errors should never pass silently. why does str.format() ignore additional/unused (sometimes accidentally passed) arguments? To me it looks like an error which is passed silently, and it surely isn't explicit: >>> 'abc'.format(21, 3, 'abc', object(), x=5, y=[1, 2, 3]) 'abc' This actually lead my friend to an issue with os.makedirs(path, exist_ok=True) still raising an error even though the docs for os.makedirs() said that exist_ok=True won't raise an error even if path already exists. It turned out he just had a long line with nested function calls, and the exist_ok was passed in to a nested .format() call instead of the os.makedirs().
Ignoring unused arguments makes it possible to create arbitrary format strings for arbitrary-sized dictionaries or objects. Say you wanted to give your program the feature to let the end-user change the output. You document what fields are available, and tell users to put those fields in {...} slots in a string. The end-user then can create templating strings with any number of those fields being used, including none at all, without error. In other words, the choice is deliberate, because there are practical reasons for allowing more arguments than are converted. Note that the C# String.Formatter implementation that inspired the Python PEP does the same, for those same reasons. Not that the discussion on this part of the PEP is that clear cut; Guido van Rossum at some point tries to address this issue: The PEP appears silent on what happens if there are too few or too many positional arguments, or if there are missing or unused keywords. Missing ones should be errors; I'm not sure about redundant (unused) ones. On the one hand complaining about those gives us more certainty that the format string is correct. On the other hand there are some use cases for passing lots of keyword parameters (e.g. simple web templating could pass a fixed set of variables using **dict). Even in i18n (translation) apps I could see the usefulness of allowing unused parameters to which the PEP author responded that they were still undecided on this point. For use-cases where you must raise an exception for unused arguments you are expected to subclass the string.Formatter class and provide an implementation for Formatter.check_unused_args(); the default implementation does nothing. This of course doesn't help your friend's case, where you used str.format(*args, **kwargs) rather than Formatter().format(str, *args, **kwargs). I believe that at some point the idea was that you could replace the formatter used by str.format() with a custom implementation, but that never came to pass. If you use the flake8 linter, then you can add the flake8-string-format plugin to detect the obvious cases, where you passed in an explicit keyword argument that is not being used by the format string.
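A minimal sketch of that subclassing approach (the StrictFormatter name and the exact error message are made up here, not part of the standard library):
from string import Formatter

class StrictFormatter(Formatter):
    def check_unused_args(self, used_args, args, kwargs):
        # used_args holds the keys actually consumed: ints for positional
        # arguments, strings for keyword arguments.
        unused_pos = set(range(len(args))) - set(used_args)
        unused_kw = set(kwargs) - set(used_args)
        if unused_pos or unused_kw:
            raise TypeError('unused arguments: %s %s'
                            % (sorted(unused_pos), sorted(unused_kw)))

f = StrictFormatter()
print(f.format('{0} {name}', 1, name='x'))  # '1 x'
f.format('abc', 21, x=5)                    # raises TypeError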
What is the best way to remove accents with apache spark dataframes in PySpark?
I need to delete accents from characters in spanish and others languages from different datasets. I already did a function based in the code provided in this post that removes special the accents. The problem is that the function is slow because it uses an UDF. I'm just wondering if I can improve the performance of my function to get results in less time, because this is good for small dataframes but not for big ones. Thanks in advance. Here the code, you will be able to run it as it is presented: # Importing sql types from pyspark.sql.types import StringType, IntegerType, StructType, StructField from pyspark.sql.functions import udf, col import unicodedata # Building a simple dataframe: schema = StructType([StructField("city", StringType(), True), StructField("country", StringType(), True), StructField("population", IntegerType(), True)]) countries = ['Venezuela', 'US@A', 'Brazil', 'Spain'] cities = ['Maracaibó', 'New York', ' São Paulo ', '~Madrid'] population = [37800000,19795791,12341418,6489162] # Dataframe: df = sqlContext.createDataFrame(list(zip(cities, countries, population)), schema=schema) df.show() class Test(): def __init__(self, df): self.df = df def clearAccents(self, columns): """This function deletes accents in strings column dataFrames, it does not eliminate main characters, but only deletes special tildes. :param columns String or a list of column names. """ # Filters all string columns in dataFrame validCols = [c for (c, t) in filter(lambda t: t[1] == 'string', self.df.dtypes)] # If None or [] is provided with column parameter: if (columns == "*"): columns = validCols[:] # Receives a string as an argument def remove_accents(inputStr): # first, normalize strings: nfkdStr = unicodedata.normalize('NFKD', inputStr) # Keep chars that has no other char combined (i.e. accents chars) withOutAccents = u"".join([c for c in nfkdStr if not unicodedata.combining(c)]) return withOutAccents function = udf(lambda x: remove_accents(x) if x != None else x, StringType()) exprs = [function(col(c)).alias(c) if (c in columns) and (c in validCols) else c for c in self.df.columns] self.df = self.df.select(*exprs) foo = Test(df) foo.clearAccents(columns="*") foo.df.show()
One possible improvement is to build a custom Transformer, which will handle Unicode normalization, and corresponding Python wrapper. It should reduce overall overhead of passing data between JVM and Python and doesn't require any modifications in Spark itself or access to private API. On JVM side you'll need a transformer similar to this one: package net.zero323.spark.ml.feature import java.text.Normalizer import org.apache.spark.ml.UnaryTransformer import org.apache.spark.ml.param._ import org.apache.spark.ml.util._ import org.apache.spark.sql.types.{DataType, StringType} class UnicodeNormalizer (override val uid: String) extends UnaryTransformer[String, String, UnicodeNormalizer] { def this() = this(Identifiable.randomUID("unicode_normalizer")) private val forms = Map( "NFC" -> Normalizer.Form.NFC, "NFD" -> Normalizer.Form.NFD, "NFKC" -> Normalizer.Form.NFKC, "NFKD" -> Normalizer.Form.NFKD ) val form: Param[String] = new Param(this, "form", "unicode form (one of NFC, NFD, NFKC, NFKD)", ParamValidators.inArray(forms.keys.toArray)) def setN(value: String): this.type = set(form, value) def getForm: String = $(form) setDefault(form -> "NFKD") override protected def createTransformFunc: String => String = { val normalizerForm = forms($(form)) (s: String) => Normalizer.normalize(s, normalizerForm) } override protected def validateInputType(inputType: DataType): Unit = { require(inputType == StringType, s"Input type must be string type but got $inputType.") } override protected def outputDataType: DataType = StringType } Corresponding build definition: name := "unicode-normalization" version := "1.0" crossScalaVersions := Seq("2.10.6", "2.11.8") organization := "net.zero323" val sparkVersion = "1.6.2" libraryDependencies ++= Seq( "org.apache.spark" %% "spark-core" % sparkVersion, "org.apache.spark" %% "spark-sql" % sparkVersion, "org.apache.spark" %% "spark-mllib" % sparkVersion ) On Python side you'll need a wrapper similar to this one. If you use 2.0+ keyword_only has been moved to top pyspark module. from pyspark.ml.param.shared import * from pyspark.ml.util import keyword_only from pyspark.ml.wrapper import JavaTransformer class UnicodeNormalizer(JavaTransformer, HasInputCol, HasOutputCol): @keyword_only def __init__(self, form="NFKD", inputCol=None, outputCol=None): super(UnicodeNormalizer, self).__init__() self._java_obj = self._new_java_obj( "net.zero323.spark.ml.feature.UnicodeNormalizer", self.uid) self.form = Param(self, "form", "unicode form (one of NFC, NFD, NFKC, NFKD)") kwargs = self.__init__._input_kwargs self.setParams(**kwargs) @keyword_only def setParams(self, form="NFKD", inputCol=None, outputCol=None): kwargs = self.setParams._input_kwargs return self._set(**kwargs) def setForm(self, value): return self._set(form=value) def getForm(self): return self.getOrDefault(self.form) Build Scala package: sbt +package include it when you start shell or submit. For example for Spark build with Scala 2.10: bin/pyspark --jars path-to/target/scala-2.10/unicode-normalization_2.10-1.0.jar \ --driver-class-path path-to/target/scala-2.10/unicode-normalization_2.10-1.0.jar and you should be ready to go. 
All what is left is a little bit of regexp magic: from pyspark.sql.functions import regexp_replace normalizer = UnicodeNormalizer(form="NFKD", inputCol="text", outputCol="text_normalized") df = sc.parallelize([ (1, "Maracaibó"), (2, "New York"), (3, " São Paulo "), (4, "~Madrid") ]).toDF(["id", "text"]) (normalizer .transform(df) .select(regexp_replace("text_normalized", "\p{M}", "")) .show()) ## +--------------------------------------+ ## |regexp_replace(text_normalized,\p{M},)| ## +--------------------------------------+ ## | Maracaibo| ## | New York| ## | Sao Paulo | ## | ~Madrid| ## +--------------------------------------+ Please note that this follows the same conventions as built in text transformers and is not null safe. You can easily correct for that by check for null in createTransformFunc.
Lambdas from a list comprehension are returning a lambda when called
I am trying to iterate the lambda func over a list as in test.py, and I want to get the call result of the lambda, not the function object itself. However, the following output really confused me. ------test.py--------- #!/bin/env python #coding: utf-8 a = [lambda: i for i in range(5)] for i in a: print i() --------output--------- <function <lambda> at 0x7f489e542e60> <function <lambda> at 0x7f489e542ed8> <function <lambda> at 0x7f489e542f50> <function <lambda> at 0x7f489e54a050> <function <lambda> at 0x7f489e54a0c8> I modified the variable name used when printing the call result to t as follows, and everything works fine. I am wondering what this is all about. --------test.py(update)-------- a = [lambda: i for i in range(5)] for t in a: print t() -----------output------------- 4 4 4 4 4
In Python 2 list comprehension 'leaks' the variables to outer scope: >>> [i for i in xrange(3)] [0, 1, 2] >>> i 2 Note that the behavior is different on Python 3: >>> [i for i in range(3)] [0, 1, 2] >>> i Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'i' is not defined When you define lambda it's bound to variable i, not its' current value as your second example shows. Now when you assign new value to i the lambda will return whatever is the current value: >>> a = [lambda: i for i in range(5)] >>> a[0]() 4 >>> i = 'foobar' >>> a[0]() 'foobar' Since the value of i within the loop is the lambda itself you'll get it as a return value: >>> i = a[0] >>> i() <function <lambda> at 0x01D689F0> >>> i()()()() <function <lambda> at 0x01D689F0> UPDATE: Example on Python 2.7: Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> a = [lambda: i for i in range(5)] >>> for i in a: ... print i() ... <function <lambda> at 0x7f1eae7f15f0> <function <lambda> at 0x7f1eae7f1668> <function <lambda> at 0x7f1eae7f16e0> <function <lambda> at 0x7f1eae7f1758> <function <lambda> at 0x7f1eae7f17d0> Same on Python 3.4: Python 3.4.3 (default, Oct 14 2015, 20:28:29) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. >>> a = [lambda: i for i in range(5)] >>> for i in a: ... print(i()) ... 4 4 4 4 4 For details about the change regarding the variable scope with list comprehension see Guido's blogpost from 2010. We also made another change in Python 3, to improve equivalence between list comprehensions and generator expressions. In Python 2, the list comprehension "leaks" the loop control variable into the surrounding scope: x = 'before' a = [x for x in 1, 2, 3] print x # this prints '3', not 'before' However, in Python 3, we decided to fix the "dirty little secret" of list comprehensions by using the same implementation strategy as for generator expressions. Thus, in Python 3, the above example (after modification to use print(x) :-) will print 'before', proving that the 'x' in the list comprehension temporarily shadows but does not override the 'x' in the surrounding scope.
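For completeness (not part of the original answer), the usual fix is to bind the current value of i as a default argument so each lambda keeps its own copy:
a = [lambda i=i: i for i in range(5)]
print([f() for f in a])  # [0, 1, 2, 3, 4]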
Simplifying / optimizing a chain of for-loops
I have a chain of for-loops that works on an original list of strings and then gradually filtering the list as it goes down the chain, e.g.: import re # Regex to check that a cap exist in string. pattern1 = re.compile(r'\d.*?[A-Z].*?[a-z]') vocab = ['dog', 'lazy', 'the', 'fly'] # Imagine it's a longer list. def check_no_caps(s): return None if re.match(pattern1, s) else s def check_nomorethan_five(s): return s if len(s) <= 5 else None def check_in_vocab_plus_x(s,x): # s and x are both str. return None if s not in vocab else s+x slist = ['the', 'dog', 'jumps', 'over', 'the', 'fly'] # filter with check_no_caps slist = [check_no_caps(s) for s in slist] # filter no more than 5. slist = [check_nomorethan_five(s) for s in slist if s is not None] # filter in vocab slist = [check_in_vocab_plus_x(s, str(i)) for i,s in enumerate(slist) if s is not None] The above is just an example and in reality my functions to manipulate the strings are more complicated but they do return the original string or a manipulated one. I could use generators instead of list and do something like this: slist = ['the', 'dog', 'jumps', 'over', 'the', 'fly'] # filter with check_no_caps and no more than 5. slist = (s2 check_no_caps(s1) for s1 in slist for s2 in check_nomorethan_five(s1) if s1) # filter in vocab slist = [check_in_vocab_plus_x(s, str(i)) for i,s in enumerate(slist) if s is not None] Or in one crazy nested generator: slist = ['the', 'dog', 'jumps', 'over', 'the', 'fly'] slist = (s3 check_no_caps(s1) for s1 in slist for s2 in check_nomorethan_five(s1) if s1 for s3 in check_in_vocab_plus_x(s2, str(i)) if s2) There must be a better way. Is there a way to make the chain of for-loop faster? Is there a way to do it with map, reduce and filter? Will it be faster? Imagine that my original slist is very very large like 10s of billions. And my functions are a not as simple as the functions above, they do some computation and do around 1,000 calls per second.
First of all is the overall process that you make on your strings. You are taking some strings and to each of them you apply certain functions. Then you cleanup the list. Let's say for a while that all the functions you apply to strings works at a constant time (it's no true, but for now it won't matter). In your solution you iterate throgh list applying one function (that's O(N)). Then you take next function and iterate again (another O(N)), and so on. So, the obvious way to speed-up is to reduce number of loops. That's not so difficult. The next thing to do is try to optimize your functions. E.g. you use regexp to check whether string has capital letters, but there is str.islower (Return true if all cased characters in the string are lowercase and there is at least one cased character, false otherwise). So, this is the very first attempt to simplify and speed-up your code: vocab = ['dog', 'lazy', 'the', 'fly'] # Imagine it's a longer list. # note that first two functions can be combined in one def no_caps_and_length(s): return s if s.islower() and len(s)<=5 else None # this one is more complicated and cannot be merged with first two # (not really, but as you say, some functions are rather complicated) def check_in_vocab_plus_x(s,x): # s and x are both str. return None if s not in vocab else s+x # now let's introduce a function that would pipe a string through all functions you need def pipe_through_funcs(s): # yeah, here we have only two, but could be more funcs = [no_caps_and_length, check_in_vocab_plus_x] for func in funcs: if s == None: return s s = func(s) return s slist = ['the', 'dog', 'jumps', 'over', 'the', 'fly'] # final step: slist = filter(lambda a: a!=None, map(pipe_through_funcs, slist)) There might be one more thing that can be improved. Currently you iterate through list modifying elements and then filter it out. But if might be faster to filter and then modify. Like this: vocab = ['dog', 'lazy', 'the', 'fly'] # Imagine it's a longer list. # make a function that does all the checks for filtering # you can make a big expression and return its result, # or a sequence of ifs, or anything in-between, # it won't affect performance, # but make sure you put cheaper checks first def my_filter(s): if len(s)>5: return False if not s.islower(): return False if s not in vocab: return False # maybe more checks here return True # now we need modifying function # there is a concern: if you need indices as they were in original list # you might need to think of some way to pass them here # as you iterate through filtered out list def modify(s,x): s += x # maybe more actions return s slist = ['the', 'dog', 'jumps', 'over', 'the', 'fly'] # final step: slist = map(modify, filter(my_filter, slist)) Note also, that in some cases generators, maps and things can be faster, but that is not always true. I believe, that if number of items you filter out is substantial, it might be faster to use a for-loop with append. I would not vouch that it will be faster but you could just try something like this: initial_list = ['the', 'dog', 'jumps', 'over', 'the', 'fly'] new_list = [] for s in initial_list: processed = pipe_through_funcs(s) if processed != None: new_list.append(processed)
Python: PEP 8 class name as variable
Which is the convention according to PEP 8 for writing variables that identify class names (not instances)? That is, given two classes, A and B, which of the following statements would be the right one? target_class = A if some_condition else B instance = target_class() or TargetClass = A if some_condition else B instance = TargetClass() As stated in the style guide, Class Names: Class names should normally use the CapWords convention. But also Method Names and Instance Variables: Use the function naming rules: lowercase with words separated by underscores as necessary to improve readability. In my opinion, these two conventions clash and I can't find which one prevails.
In the absence of a specific rule covering this case in PEP 8, one can make an argument for both sides of the coin: One side is: since A and B are themselves just variables that happen to hold a reference to a class, use CamelCase (TargetClass) in this case as well. Nothing prevents you from doing class A: pass class B: pass x = A A = B B = x Now A and B each point to the other class, so they aren't really fixed to "their" class. So the only responsibility of A and B is to hold a class (no matter whether it has the same name or a different one), and the same goes for TargetClass. In order to remain unbiased, we can argue the other way as well: A and B are special insofar as they are created along with their classes, and the classes' internals carry the same name. Insofar as they are kind of "original", any other assignment should be marked as special, in the sense that it is to be seen as a variable and is thus written in lower_case. The truth lies, as so often, somewhere in the middle. There are cases where I would go one way, and others where I would go the other way. Example 1: You pass a class, which maybe should be instantiated, to a method or function: def create_new_one(cls): return cls() class A: pass class B: pass print(create_new_one(A)) In this case, cls is clearly temporary and clearly a variable; it can be different at every call. So it should be lower_case. Example 2: Aliasing of a class class OldAPI: pass class NewAPI: pass class ThirdAPI: pass CurrentAPI = ThirdAPI In this case, CurrentAPI is to be seen as a kind of alias for the other one and remains constant throughout the program run. Here I would prefer CamelCase.
Individual timeouts for concurrent.futures
I see two ways to specify timeouts in concurrent.futures. as_completed() wait() Both methods handle N running futures. I would like to specify an individual timeout for each future. Use Case: Future for getting data from DB has a timeout of 0.5 secs. Future for getting data from a HTTP server has a timeout of 1.2 secs. How do I handle this with concurrent.futures? Or is this library not the right tool? Conclusion AFAIK the solution by mdurant is a good work-around. I think I will use a different library the next time. Maybe asyncio has better support for this. See: https://docs.python.org/3/library/asyncio-task.html#asyncio.sleep
How about implementing your own: wait(dbfutures + httpfutures, timeout=0.5) [fut.cancel() for fut in dbfutures if not fut.done()] wait(httpfutures, timeout=0.7) [fut.cancel() for fut in httpfutures if not fut.done()] (or a while loop with sleep/check, or wait with a short timeout)
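A slightly fuller sketch of the same idea (db_call, http_call, db_queries and urls are placeholders, not from the original answer): wait for the shortest deadline first, cancel whatever has not finished, then keep waiting on the rest for the remaining time.
from concurrent.futures import ThreadPoolExecutor, wait

with ThreadPoolExecutor() as pool:
    dbfutures = [pool.submit(db_call, q) for q in db_queries]
    httpfutures = [pool.submit(http_call, u) for u in urls]

    wait(dbfutures + httpfutures, timeout=0.5)
    for fut in dbfutures:
        if not fut.done():
            fut.cancel()  # only succeeds if the task has not started running yet

    wait(httpfutures, timeout=0.7)  # 0.5 s already elapsed, ~1.2 s in total
    for fut in httpfutures:
        if not fut.done():
            fut.cancel()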
why is a sum of strings converted to floats
Setup consider the following dataframe (note the strings): df = pd.DataFrame([['3', '11'], ['0', '2']], columns=list('AB')) df df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 2 entries, 0 to 1 Data columns (total 2 columns): A 2 non-null object B 2 non-null object dtypes: object(2) memory usage: 104.0+ bytes Question I'm going to sum. I expect the strings to be concatenated. df.sum() A 30.0 B 112.0 dtype: float64 It looks as though the strings were concatenated then converted to float. Is there a good reason for this? Is this a bug? Anything enlightening will be up voted.
Went with the good old stack trace. Learned a bit about pdb through Pycharm as well. Turns out what happens is the following: 1) cls.sum = _make_stat_function( 'sum', name, name2, axis_descr, 'Return the sum of the values for the requested axis', nanops.nansum) Let's have a look at _make_stat_function 2) def _make_stat_function(name, name1, name2, axis_descr, desc, f): @Substitution(outname=name, desc=desc, name1=name1, name2=name2, axis_descr=axis_descr) @Appender(_num_doc) def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs): _validate_kwargs(name, kwargs, 'out', 'dtype') if skipna is None: skipna = True if axis is None: axis = self._stat_axis_number if level is not None: return self._agg_by_level(name, axis=axis, level=level, skipna=skipna) return self._reduce(f, name, axis=axis, skipna=skipna, numeric_only=numeric_only) The last line is key. It's kind of funny, as there are about 7 different _reduces within pandas.core. pdb says it's the one in pandas.core.frame. Let's take a look. 3) def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None, filter_type=None, **kwds): axis = self._get_axis_number(axis) def f(x): return op(x, axis=axis, skipna=skipna, **kwds) labels = self._get_agg_axis(axis) # exclude timedelta/datetime unless we are uniform types if axis == 1 and self._is_mixed_type and self._is_datelike_mixed_type: numeric_only = True if numeric_only is None: try: values = self.values result = f(values) except Exception as e: # try by-column first if filter_type is None and axis == 0: try: # this can end up with a non-reduction # but not always. if the types are mixed # with datelike then need to make sure a series result = self.apply(f, reduce=False) if result.ndim == self.ndim: result = result.iloc[0] return result except: pass if filter_type is None or filter_type == 'numeric': data = self._get_numeric_data() elif filter_type == 'bool': data = self._get_bool_data() else: # pragma: no cover e = NotImplementedError("Handling exception with filter_" "type %s not implemented." % filter_type) raise_with_traceback(e) result = f(data.values) labels = data._get_agg_axis(axis) else: if numeric_only: if filter_type is None or filter_type == 'numeric': data = self._get_numeric_data() elif filter_type == 'bool': data = self._get_bool_data() else: # pragma: no cover msg = ("Generating numeric_only data with filter_type %s" "not supported." % filter_type) raise NotImplementedError(msg) values = data.values labels = data._get_agg_axis(axis) else: values = self.values result = f(values) if hasattr(result, 'dtype') and is_object_dtype(result.dtype): try: if filter_type is None or filter_type == 'numeric': result = result.astype(np.float64) elif filter_type == 'bool' and notnull(result).all(): result = result.astype(np.bool_) except (ValueError, TypeError): # try to coerce to the original dtypes item by item if we can if axis == 0: result = com._coerce_to_dtypes(result, self.dtypes) return Series(result, index=labels) Holy smokes, talk about an out of control function. Someone needs a refactoring! Let's zoom in on the trouble line(s): if hasattr(result, 'dtype') and is_object_dtype(result.dtype): try: if filter_type is None or filter_type == 'numeric': result = result.astype(np.float64) And you better believe that last line gets executed. 
Here's some of the pdb trace: > c:\users\matthew\anaconda2\lib\site-packages\pandas\core\frame.py(4801)_reduce() -> result = result.astype(np.float64) (Pdb) l 4796 result = f(values) 4797 4798 if hasattr(result, 'dtype') and is_object_dtype(result.dtype): 4799 try: 4800 if filter_type is None or filter_type == 'numeric': 4801 -> result = result.astype(np.float64) 4802 elif filter_type == 'bool' and notnull(result).all(): 4803 result = result.astype(np.bool_) 4804 except (ValueError, TypeError): 4805 4806 # try to coerce to the original dtypes item by item if we can If you're a non-believer, open up pandas.core.frame.py and put a print "OI" right above line 4801. It should splat out to console :). Note I'm on Anaconda 2, windows. I'm going to go with "bug", to answer your question.
django-debug-toolbar breaking on admin while getting sql stats
Environment: django-debug-toolbar breaks while collecting SQL stats; it works fine on other pages and breaks only on pages that execute SQL queries. Request Method: GET Request URL: http://www.blog.local/admin/ Django Version: 1.9.7 Python Version: 2.7.6 Installed Applications: [ .... 'django.contrib.staticfiles', 'debug_toolbar'] Installed Middleware: [ ... 'debug_toolbar.middleware.DebugToolbarMiddleware'] Traceback: File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 235. response = middleware_method(request, response) File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/middleware.py" in process_response 129. panel.generate_stats(request, response) File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/panel.py" in generate_stats 192. query['sql'] = reformat_sql(query['sql']) File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/utils.py" in reformat_sql 27. return swap_fields(''.join(stack.run(sql))) File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/sqlparse/engine/filter_stack.py" in run 29. stream = filter_.process(stream) Exception Type: TypeError at /admin/ Exception Value: process() takes exactly 3 arguments (2 given)
The latest version of sqlparse was released today and it is not compatible with django-debug-toolbar 1.4 on Django 1.9. The workaround is to force pip to install sqlparse==0.1.19.
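For example (assuming pip manages the same environment that Django runs in):
pip install sqlparse==0.1.19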
Why and how are Python functions hashable?
I recently tried the following commands in Python: >>> {lambda x: 1: 'a'} {<function __main__.<lambda>>: 'a'} >>> def p(x): return 1 >>> {p: 'a'} {<function __main__.p>: 'a'} The success of both dict creations indicates that both lambda and regular functions are hashable. (Something like {[]: 'a'} fails with TypeError: unhashable type: 'list'). The hash is apparently not necessarily the ID of the function: >>> m = lambda x: 1 >>> id(m) 140643045241584 >>> hash(m) 8790190327599 >>> m.__hash__() 8790190327599 The last command shows that the __hash__ method is explicitly defined for lambdas, i.e., this is not some automagical thing Python computes based on the type. What is the motivation behind making functions hashable? For a bonus, what is the hash of a function?
It's nothing special. As you can see if you examine the unbound __hash__ method of the function type: >>> def f(): pass ... >>> type(f).__hash__ <slot wrapper '__hash__' of 'object' objects> it just inherits __hash__ from object. Function == and hash work by identity. The difference between id and hash is normal for any type that inherits object.__hash__: >>> x = object() >>> id(x) 40145072L >>> hash(x) 2509067 You might think __hash__ is only supposed to be defined for immutable objects, but that's not true. __hash__ should only be defined for objects where everything involved in == comparisons is immutable. For objects whose == is based on identity, it's completely standard to base hash on identity as well, since even if the objects are mutable, they can't possibly be mutable in a way that would change their identity. Files, modules, and other mutable objects with identity-based == all behave this way.
How to assign member variables temporarily?
I often find that I need to assign some member variables temporarily, e.g. old_x = c.x old_y = c.y # keep c.z unchanged c.x = new_x c.y = new_y do_something(c) c.x = old_x c.y = old_y but I wish I could simply write with c.x = new_x; c.y = new_y: do_something(c) or even do_something(c with x = new_x; y = new_y) Can Python's decorators or other language features enable this kind of pattern? (I could modify c's class as needed)
Context managers may be used for it easily. Quoting official docs: Typical uses of context managers include saving and restoring various kinds of global state, locking and unlocking resources, closing opened files, etc. It seems like saving and restoring state is exactly what we want to do here. Example: from contextlib import contextmanager @contextmanager def temporary_change_attributes(something, **kwargs): previous_values = {k: getattr(something, k) for k in kwargs} for k, v in kwargs.items(): setattr(something, k, v) try: yield finally: for k, v in previous_values.items(): setattr(something, k, v) class Something(object): def __init__(self, x, y): self.x = x self.y = y def say_hello(self): print("hello", self.x, self.y) s = Something(1, 2) s.say_hello() # hello 1 2 with temporary_change_attributes(s, x=4, y=5): s.say_hello() # hello 4 5 s.say_hello() # hello 1 2
Pycharm import RuntimeWarning after updating to 2016.2
After updating to new version 2016.2, I am getting RuntimeWarning: Parent module 'tests' not found while handling absolute import import unittest RuntimeWarning: Parent module 'tests' not found while handling absolute import import datetime as dt 'tests' is a package inside my main app package, and I receive these warnings when I try to execute unit tests inside this folder. This issue only came up after updating to 2016.2. Besides the warnings, the remaining code works fine. Edit: This is a known issue - https://youtrack.jetbrains.com/issue/PY-20171. They are suggesting to replace utrunner.py in PyCharm installation folder.
This is a known issue with the 2016.2 release. Progress can be followed on the JetBrains website here. According to this page it's due to be fixed in the 2016.3 release but you can follow the utrunner.py workaround that others have mentioned in the meantime (I downloaded the 2016.1 release and copied the file over from there).
rounding errors in Python floor division
I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one: >>> 8.0 / 0.4 # as expected 20.0 >>> floor(8.0 / 0.4) # int works too 20 >>> 8.0 // 0.4 # expecting 20.0 19.0 This happens on both Python 2 and 3 on x64. As far as I see it this is either a bug or a very dumb specification of // since I don't see any reason why the last expression should evaluate to 19.0. Why isn't a // b simply defined as floor(a / b) ? EDIT: 8.0 % 0.4 also evaluates to 0.3999999999999996. At least this is consequent since then 8.0 // 0.4 * 0.4 + 8.0 % 0.4 evaluates to 8.0 EDIT: This is not a duplicate of Is floating point math broken? since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why a // b isn't defined as / equal to floor(a / b)
As you and khelwood already noticed, 0.4 cannot be exactly represented as a float. Why? It is two fifths (4/10 == 2/5), which does not have a finite binary fraction representation. Try this: from fractions import Fraction Fraction('8.0') // Fraction('0.4') # or equivalently # Fraction(8, 1) // Fraction(2, 5) # or # Fraction('8/1') // Fraction('2/5') # 20 However Fraction('8') // Fraction(0.4) # 19 Here, 0.4 is interpreted as a float literal (and thus a floating point binary number) which requires (binary) rounding, and only then converted to the rational number Fraction(3602879701896397, 9007199254740992), which is almost but not exactly 4 / 10. Then the floored division is executed, and because 19 * Fraction(3602879701896397, 9007199254740992) < 8.0 and 20 * Fraction(3602879701896397, 9007199254740992) > 8.0 the result is 19, not 20. The same probably happens for 8.0 // 0.4. I.e., it seems floored division is determined atomically (but on the merely approximate float values of the interpreted float literals). So why does floor(8.0 / 0.4) give the "right" result? Because there, two rounding errors cancel each other out. First 1) the division is performed, yielding something slightly smaller than 20.0, but not representable as a float. It gets rounded to the closest float, which happens to be 20.0. Only then is the floor operation performed, but now acting on exactly 20.0, thus not changing the number any more. 1) As Kyle Strand points out, determining the exact result and then rounding it isn't what actually happens at a low 2) level (CPython's C code or even CPU instructions). However, it can be a useful model for determining the expected 3) result. 2) On the lowest 4) level, however, this might not be too far off. Some chipsets determine float results by first computing a more precise (but still not exact, it simply has some more binary digits) internal floating point result and then rounding to IEEE double precision. 3) "expected" by the Python specification, not necessarily by our intuition. 4) Well, the lowest level above logic gates. We don't have to consider the quantum mechanics that make semiconductors possible in order to understand this.
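The claim is easy to check directly with the fractions module (assuming IEEE-754 doubles, which virtually every current platform uses):
from fractions import Fraction

q = Fraction(0.4)            # the exact value of the float literal 0.4
print(q)                     # 3602879701896397/9007199254740992
print(19 * q < 8 < 20 * q)   # True, so 8.0 // 0.4 floors down to 19
print(Fraction(8.0 / 0.4))   # exactly 20: the true quotient rounded up to 20.0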
Convert Python sequence to NumPy array, filling missing values
The implicit conversion of a Python sequence of variable-length lists into a NumPy array causes the array to be of type object. v = [[1], [1, 2]] np.array(v) >>> array([[1], [1, 2]], dtype=object) Trying to force another type will cause an exception: np.array(v, dtype=np.int32) ValueError: setting an array element with a sequence. What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder? From my sample sequence v, I would like to get something like this, if 0 is the placeholder: array([[1, 0], [1, 2]], dtype=int32)
Pandas and its DataFrames deal beautifully with missing data:

import numpy as np
import pandas as pd

v = [[1], [1, 2]]
result = pd.DataFrame(v).fillna(0).values.astype(np.int32)
print(result)
# [[1 0]
#  [1 2]]
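If pulling in pandas only for the padding feels heavy, a plain NumPy/standard-library alternative is sketched below; it is a minimal sketch assuming 0 as the placeholder (Python 3, and the variable names are mine):

import itertools
import numpy as np

v = [[1], [1, 2]]

# zip_longest pads the shorter rows with the fill value, column by column;
# transposing afterwards restores the original row orientation.
padded = np.array(list(itertools.zip_longest(*v, fillvalue=0)), dtype=np.int32).T
print(padded)
# [[1 0]
#  [1 2]]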
IPython 5.0 and key bindings in console
The new release of IPython no longer depends on readline but uses the pure-Python library prompt_toolkit, solving maintenance problems on Apple and Windows systems. A new feature is the ability to edit a multi-line code block, using the cursor keys to move freely in the block — but with this power comes, at least for me, a problem: because a ret inserts a new line into your code, to pass the whole block to the interpreter you have to use the shortcut alt+ret, or possibly the less convenient key sequence esc followed by ret. I say this is a problem because my terminal emulator of choice is the XTerm and, on many Linux distributions, the shortcut alt+ret is not passed to the application but is used directly by the XTerm in which IPython is running to toggle the fullscreen state of said terminal (@ThomasDickey, xterm's maintainer and co-author, pointed out that, by default, xterm doesn't send the modifier bit on Enter to the application even when one unbinds the Fullscreen action). For this reason I'd like to modify at least this specific IPython key binding. I've found instructions (sort of) for the previous, readline-based versions of IPython that do not apply to the new 5.0 version. What I would need are instructions that lead me to find, in IPython's user documentation, the names of the possible actions that I can bind, the names of the shortcuts to bind to those actions, and the procedure to follow to configure a new key binding. Failing this type of canonical answer, I'd be happy with a recipe to accomplish this specific key binding, on the condition that the recipe still works in IPython 6.0.
You could change xterm's configuration. xterm is configurable (and documented). In the xterm manual, the Default Key Bindings section shows the default binding for this key:

Alt <Key>Return:fullscreen() \n\

You can suppress that binding in more than one way:

- using the omitTranslation resource to suppress the feature
- setting the fullscreen resource to never

However, just suppressing it will not make the key send anything interesting (xterm ignores the modifier for Enter). Setting a translations resource works, e.g., in your $HOME/.Xdefaults file:

*VT100*translations: #override \n\
    Alt <Key>Return: string("\033[27;3;13~")
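If you would rather solve this on the IPython side instead of reconfiguring xterm, IPython 5 exposes its prompt_toolkit key-binding registry to startup files. The sketch below is based on the 5.x API (pt_cli and key_bindings_registry are specific to that release line and changed later), and the accept_action call is my assumption about how prompt_toolkit 1.x submits a multi-line block, so treat this as a starting point rather than a guaranteed recipe:

# Hypothetical startup file, e.g. ~/.ipython/profile_default/startup/keybindings.py
# Assumes IPython 5.x with prompt_toolkit 1.x; attribute names changed in later versions.
from IPython import get_ipython
from prompt_toolkit.enums import DEFAULT_BUFFER
from prompt_toolkit.keys import Keys
from prompt_toolkit.filters import HasFocus, HasSelection, EmacsInsertMode, ViInsertMode

ip = get_ipython()
insert_mode = ViInsertMode() | EmacsInsertMode()

def accept_block(event):
    # Assumption: in prompt_toolkit 1.x the buffer's accept_action submits the
    # whole (possibly multi-line) input to IPython, much like alt+ret would.
    buf = event.current_buffer
    buf.accept_action.validate_and_handle(event.cli, buf)

# Only register the binding when IPython is actually running on prompt_toolkit.
if getattr(ip, 'pt_cli', None) is not None:
    registry = ip.pt_cli.application.key_bindings_registry
    # F2 is an arbitrary choice of a key that xterm passes through untouched.
    registry.add_binding(Keys.F2,
                         filter=(HasFocus(DEFAULT_BUFFER)
                                 & ~HasSelection()
                                 & insert_mode))(accept_block)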
A surprise with 1**math.nan and 0j**math.nan
I'm surprised that

>>> import math
>>> 1**math.nan
1.0

And while we are at it, also that

>>> 0j**math.nan
0j

I didn't find any other examples. Is there a reason or some logic I've missed that makes this the right choice? Or is this a slip? I was expecting nan, as for every other number except 1 or 0j.

Edit 1: Thanks to jedwards's comment below I have a reference. But I still don't understand why. Why was this decided as the standard? Also, I couldn't find a reference for 0j**math.nan...

Edit 2: So, following the answers below and some other material, the logic may be this: any calculation involving nan should return nan, unless the calculation always returns the same answer regardless of its arguments. In such cases, the fact that we have nan as an argument should not affect the result, and we should still get the fixed answer. This certainly explains 1**math.nan and math.nan**0. It also explains why 0**math.nan gives nan and not 0 (since 0**n is 0 for all n except n=0, where it results in 1), and it might be stretched to cover why math.nan*0 is nan, if we agree that the argument need not be finite. But if this is the logic behind the scenes, then 0j**math.nan should have been nan, since 0j**n is 0 for all n except n=0, where 0j**0 is 1. So... does 0j**math.nan have different reasoning? Or is it a problem in the implementation?
Quoting this question, which in turn quotes IEEE 754 (see Wikipedia):

The 2008 version of the IEEE 754 standard says that pow(1,qNaN) and pow(qNaN,0) should both return 1 since they return 1 whatever else is used instead of quiet NaN.

For details see page 56 of IEEE 754 2008:

pow(x, ±0) is 1 for any x (even a zero, quiet NaN, or infinity)
pow(+1, y) is 1 for any y (even a quiet NaN)
pow(±0, y) is ±∞ and signals the divideByZero exception for y an odd integer < 0

Thus, the reasoning seems to be that no matter what number k is in the exponent, 1^k = 1, so 1^NaN should also be 1. Why that reasoning is reasonable (I'm sure it is) is something I'll need to dig into further.

Personally, I think this makes sense: NaN doesn't really exist in math; it's just that our floating point representation can't handle it (or, NaN means "the computation got out of hand; this is some number, but I'm not sure which"). Thus, 1^NaN can be read as 1 raised to some arbitrary power (not 1 raised to something that is not a number), and since the answer is 1 no matter what that power is, it can only help to define 1^NaN as 1.
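A quick interactive check (Python 3.5+, since math.nan is used) shows how these special cases play out; the expected outputs in the comments come from the question and the IEEE rules quoted above:

import math

nan = math.nan

print(1 ** nan)          # 1.0  -- 1**y is 1 for every y, so NaN does not matter
print(nan ** 0)          # 1.0  -- x**0 is 1 for every x
print(math.pow(1, nan))  # 1.0  -- math.pow follows the same IEEE/C special cases
print(0 ** nan)          # nan  -- 0**n is not constant in n (0**0 == 1), so NaN propagates
print((0j) ** nan)       # 0j   -- the surprising complex case the question asks about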